Discussion: BUG #3657: Performance leaks when using between of two equal dates


BUG #3657: Performance leaks when using between of two equal dates

From:
"Tiago Daniel Jacobs"
Date:
The following bug has been logged online:

Bug reference:      3657
Logged by:          Tiago Daniel Jacobs
Email address:      tiago@mdtestudio.com.br
PostgreSQL version: 8.2.4
Operating system:   Linux
Description:        Performance leaks when using between of two equal dates
Details:

Hi, please analyze this case. If the type of the field is date, and the
dates used in BETWEEN are equal, why not "convert" it to the equal operator
internally?

type of data_dt_Data is date

-- The slow way
EXPLAIN ANALYZE select * from agregados.agreg_sig where data_dt_Data between
'20070901'  and '20070901';

"Result  (cost=0.00..29.05 rows=4 width=84) (actual time=7.146..5234.762
rows=178866 loops=1)"
"  ->  Append  (cost=0.00..29.05 rows=4 width=84) (actual
time=7.144..5052.830 rows=178866 loops=1)"
"        ->  Seq Scan on agreg_sig  (cost=0.00..20.35 rows=3 width=84)
(actual time=0.000..0.000 rows=0 loops=1)"
"              Filter: ((data_dt_data >= '2007-09-01'::date) AND
(data_dt_data <= '2007-09-01'::date))"
"        ->  Index Scan using idx_agreg_sig_2007_09__data_dt_data on
agreg_sig_2007_09 agreg_sig  (cost=0.00..8.70 rows=1 width=59) (actual
time=7.143..4924.607 rows=178866 loops=1)"
"              Index Cond: ((data_dt_data >= '2007-09-01'::date) AND
(data_dt_data <= '2007-09-01'::date))"
"Total runtime: 5298.566 ms"



-- The fast way
EXPLAIN ANALYZE select * from agregados.agreg_sig where data_dt_Data =
'20070901' ;

"Result  (cost=0.00..65148.76 rows=157754 width=84) (actual
time=38.911..548.142 rows=178866 loops=1)"
"  ->  Append  (cost=0.00..65148.76 rows=157754 width=84) (actual
time=38.907..385.408 rows=178866 loops=1)"
"        ->  Seq Scan on agreg_sig  (cost=0.00..18.62 rows=3 width=84)
(actual time=0.001..0.001 rows=0 loops=1)"
"              Filter: (data_dt_data = '2007-09-01'::date)"
"        ->  Bitmap Heap Scan on agreg_sig_2007_09 agreg_sig
(cost=2607.25..65130.14 rows=157751 width=59) (actual time=38.904..271.161
rows=178866 loops=1)"
"              Recheck Cond: (data_dt_data = '2007-09-01'::date)"
"              ->  Bitmap Index Scan on idx_agreg_sig_2007_09__data_dt_data
(cost=0.00..2567.81 rows=157751 width=0) (actual time=28.348..28.348
rows=178866 loops=1)"
"                    Index Cond: (data_dt_data = '2007-09-01'::date)"
"Total runtime: 604.238 ms"

Best Regards,
Tiago

Re: BUG #3657: Performance leaks when using between of two equal dates

From:
Alvaro Herrera
Date:
Tiago Daniel Jacobs wrote:

> type of data_dt_Data is date
>
> -- The slow way
> EXPLAIN ANALYZE select * from agregados.agreg_sig where data_dt_Data between
> '20070901'  and '20070901';
>
> "Result  (cost=0.00..29.05 rows=4 width=84) (actual time=7.146..5234.762
> rows=178866 loops=1)"
> "  ->  Append  (cost=0.00..29.05 rows=4 width=84) (actual
> time=7.144..5052.830 rows=178866 loops=1)"
> "        ->  Seq Scan on agreg_sig  (cost=0.00..20.35 rows=3 width=84)
> (actual time=0.000..0.000 rows=0 loops=1)"
> "              Filter: ((data_dt_data >= '2007-09-01'::date) AND
> (data_dt_data <= '2007-09-01'::date))"
> "        ->  Index Scan using idx_agreg_sig_2007_09__data_dt_data on
> agreg_sig_2007_09 agreg_sig  (cost=0.00..8.70 rows=1 width=59) (actual
> time=7.143..4924.607 rows=178866 loops=1)"
> "              Index Cond: ((data_dt_data >= '2007-09-01'::date) AND
> (data_dt_data <= '2007-09-01'::date))"
> "Total runtime: 5298.566 ms"

Please do ANALYZE agregados.agreg_sig and try the query again.  The
indexscan is grossly misestimated.

FWIW, for performance questions you should be using the
pgsql-performance list, not the bug form.

--
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
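
A minimal sketch of applying that suggestion, assuming the partition
agreg_sig_2007_09 lives in the same agregados schema; the pg_stats query is
just one way to check whether '2007-09-01' is captured in the column
statistics:

-- Refresh planner statistics for the parent table and the partition.
ANALYZE agregados.agreg_sig;
ANALYZE agregados.agreg_sig_2007_09;

-- See whether the troublesome date shows up among the most common values
-- collected for the column used in the BETWEEN clause.
SELECT n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE schemaname = 'agregados'
  AND tablename  = 'agreg_sig_2007_09'
  AND attname    = 'data_dt_data';

Re-running the EXPLAIN ANALYZE afterwards shows whether the estimated row
count for the index scan has moved closer to the 178866 rows actually
returned.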

Re: BUG #3657: Performance leaks when using between of two equal dates

From:
Tom Lane
Date:
Alvaro Herrera <alvherre@commandprompt.com> writes:
> Tiago Daniel Jacobs wrote:
>> "        ->  Index Scan using idx_agreg_sig_2007_09__data_dt_data on
>> agreg_sig_2007_09 agreg_sig  (cost=0.00..8.70 rows=1 width=59) (actual
>> time=7.143..4924.607 rows=178866 loops=1)"
>> "              Index Cond: ((data_dt_data >= '2007-09-01'::date) AND
>> (data_dt_data <= '2007-09-01'::date))"

> Please do ANALYZE agregados.agreg_sig and try the query again.  The
> indexscan is grossly misestimated.

Not sure that it's ANALYZE's fault.  Since we currently use the same
selectivity estimators for > and >= (and likewise for < and <=), we
have no hope of getting edge cases correct.  Most of the time the
stats are crude enough that it doesn't matter, but sometimes the
edge value is common and then it does matter.  I've been wondering
if it would be worth the trouble to introduce scalarlesel and
scalargesel estimators ...

            regards, tom lane
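
One way to see what Tom is describing, sketched against the system catalogs
(this query is an illustration, not something from the thread): the inclusive
operators on date point at the same restriction estimators as their strict
counterparts, so the planner cannot tell an inclusive edge from an exclusive
one.

-- List the restriction-selectivity estimator attached to each date
-- comparison operator: < and <= both use scalarltsel, > and >= both use
-- scalargtsel, while = uses eqsel.
SELECT oprname, oprrest
FROM pg_operator
WHERE oprleft  = 'date'::regtype
  AND oprright = 'date'::regtype
  AND oprname IN ('<', '<=', '>', '>=', '=');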

Re: BUG #3657: Performance leaks when using between of two equal dates

From:
Tiago Daniel Jacobs
Date:
I run ANALYZE on the database every day.

And before posting this bug report, I ran VACUUM FULL, REINDEX, and ANALYZE
on the entire DB.

Note that the type of the field is date, so BETWEEN two equal values is
exactly the same as the equal operator.

On my system I added a check: if the two dates are equal, the SQL code is
rewritten. But I think the large majority of users don't do this.

I would like to see it corrected, and to help make PostgreSQL better.

Please ask me if you need more information.

PS: I consider this a BUG, since the query simply doesn't return. And for
end users it is obvious that it should work the same as the "=" operator.
But if it is not a BUG, I'm sorry!

Best Regards,
Tiago

Tom Lane wrote:

> Alvaro Herrera <alvherre@commandprompt.com> writes:
>> Tiago Daniel Jacobs wrote:
>>> "        ->  Index Scan using idx_agreg_sig_2007_09__data_dt_data on
>>> agreg_sig_2007_09 agreg_sig  (cost=0.00..8.70 rows=1 width=59) (actual
>>> time=7.143..4924.607 rows=178866 loops=1)"
>>> "              Index Cond: ((data_dt_data >= '2007-09-01'::date) AND
>>> (data_dt_data <= '2007-09-01'::date))"
>
>> Please do ANALYZE agregados.agreg_sig and try the query again.  The
>> indexscan is grossly misestimated.
>
> Not sure that it's ANALYZE's fault.  Since we currently use the same
> selectivity estimators for > and >= (and likewise for < and <=), we
> have no hope of getting edge cases correct.  Most of the time the
> stats are crude enough that it doesn't matter, but sometimes the
> edge value is common and then it does matter.  I've been wondering
> if it would be worth the trouble to introduce scalarlesel and
> scalargesel estimators ...
>
>             regards, tom lane

Re: BUG #3657: Performance leaks when using between of two equal dates

From:
"Kevin Grittner"
Date:
>>> On Sat, Oct 6, 2007 at 11:28 AM, in message <24828.1191688097@sss.pgh.pa.us>,
Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I've been wondering
> if it would be worth the trouble to introduce scalarlesel and
> scalargesel estimators ...

FWIW, this is an issue we ran into:

http://archives.postgresql.org/pgsql-performance/2006-03/msg00211.php

We worked around it in our framework by checking for a range with equal
values and converting to an equality test before presenting it to the
database.

-Kevin
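
A sketch of that kind of application-side rewrite, using hypothetical
:start_date and :end_date placeholders for whatever values the framework
receives; the only point is to emit an equality test when both bounds are
equal so the planner can use its equality estimate:

-- Emitted when :start_date <> :end_date
SELECT *
FROM agregados.agreg_sig
WHERE data_dt_data BETWEEN :start_date AND :end_date;

-- Emitted when :start_date = :end_date, avoiding the >=/<= pair that the
-- planner misestimates.
SELECT *
FROM agregados.agreg_sig
WHERE data_dt_data = :start_date;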

Re: BUG #3657: Performance leaks when using between of two equal dates

From:
Tiago Daniel Jacobs
Date:
I'm sorry, but why work around it?

Is the overhead for the "99%" of common cases really that big? What about an
"option", like constraint exclusion...

I've worked around it as well, but would like to see this topic in a TODO or
a future implementation.

Thanks for your patience.



Kevin Grittner wrote:

> >>> On Sat, Oct 6, 2007 at 11:28 AM, in message <24828.1191688097@sss.pgh.pa.us>,
> Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I've been wondering
>> if it would be worth the trouble to introduce scalarlesel and
>> scalargesel estimators ...
>
> FWIW, this is an issue we ran into:
>
> http://archives.postgresql.org/pgsql-performance/2006-03/msg00211.php
>
> We worked around it in our framework by checking for a range with equal
> values and converting to an equality test before presenting it to the
> database.
>
> -Kevin