Discussion: Using LIMIT changes index used by planner

From:
Sven Willenberger
Date:

I have a question regarding a serious performance hit taken when using a
LIMIT clause. I am using version 7.4.6 on FreeBSD 4.10-STABLE with 2GB
of memory. The table in question contains some 25 million rows with a
bigserial primary key, orderdate index and a referrer index. The 2
select statements are as follows:

A) select storelocation,order_number from custacct where referrer = 1365
  and orderdate between '2004-12-07' and '2004-12-07 12:00:00' order by
custacctid;

B) select storelocation,order_number from custacct where referrer = 1365
  and orderdate between '2004-12-07' and '2004-12-07 12:00:00' order by
custacctid limit 10;

So the only difference is the LIMIT clause, which, in theory, should
make the query quicker once the results are ordered by custacctid.

Now the EXPLAIN results:

A) explain select storelocation,order_number from custacct where
referrer = 1365  and orderdate between '2004-12-07' and '2004-12-07
12:00:00' order by custacctid;

        QUERY PLAN


-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
  Sort  (cost=904420.55..904468.11 rows=19025 width=44)
    Sort Key: custacctid
    ->  Index Scan using orderdate_idx on custacct
(cost=0.00..903068.29 rows=19025 width=44)
          Index Cond: ((orderdate >= '2004-12-07 00:00:00'::timestamp
without time zone) AND (orderdate <= '2004-12-07 12:00:00'::timestamp
without time zone))
          Filter: (referrer = 1365)
(5 rows)

************************

B) explain select storelocation,order_number from custacct where
referrer = 1365  and orderdate between '2004-12-07' and '2004-12-07
12:00:00' order by custacctid limit 10;

                  QUERY PLAN


-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  Limit  (cost=0.00..33796.50 rows=10 width=44)
    ->  Index Scan using custacct2_pkey on custacct
(cost=0.00..64297840.86 rows=19025 width=44)
          Filter: ((referrer = 1365) AND (orderdate >= '2004-12-07
00:00:00'::timestamp without time zone) AND (orderdate <= '2004-12-07
12:00:00'::timestamp without time zone))
(3 rows)

*******************

Notice the huge cost difference in the two plans: 904468 in the one
without LIMIT versus 64297840.86 for the index scan on custacct2_pkey.
Why would the planner switch from using the orderdate index to the
primary key index (which is on a BIGSERIAL column, btw)?

I can change that behavior (and speed up the resultant query) by using
the following subquery:

explain select foo.storelocation, foo.order_number from (select
storelocation,order_number from custacct where referrer = 1365  and
orderdate between '2004-12-07' and '2004-12-07 12:00:00' order by
custacctid) as foo  limit 10;

              QUERY PLAN


-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  Limit  (cost=904420.55..904420.67 rows=10 width=100)
    ->  Subquery Scan foo  (cost=904420.55..904658.36 rows=19025 width=100)
          ->  Sort  (cost=904420.55..904468.11 rows=19025 width=44)
                Sort Key: custacctid
                ->  Index Scan using orderdate_idx on custacct
(cost=0.00..903068.29 rows=19025 width=44)
                      Index Cond: ((orderdate >= '2004-12-07
00:00:00'::timestamp without time zone) AND (orderdate <= '2004-12-07
12:00:00'::timestamp without time zone))
                      Filter: (referrer = 1365)
(7 rows)

As a side note, query A takes 1772.523 ms, while the subselect version
used to get the limit takes 1415.615 ms. Running option B (with the
other index being scanned) takes several minutes (close to 10
minutes!). What am I missing about how the planner handles the LIMIT
clause?

Sven

From:
Andrew McMillan
Date:

On Mon, 2004-12-13 at 01:13 -0500, Sven Willenberger wrote:
> I have a question regarding a serious performance hit taken when using a
> LIMIT clause. I am using version 7.4.6 on FreeBSD 4.10-STABLE with 2GB
> of memory. The table in question contains some 25 million rows with a
> bigserial primary key, orderdate index and a referrer index. The 2
> select statements are as follow:

It's an interesting question, but to be able to get answers from this
list you will need to provide "EXPLAIN ANALYZE ..." rather than just
"EXPLAIN ...".

AFAICS the bad plan with LIMIT comes from the planner optimistically
assuming the matching rows lie near the 0.00 end of that scan rather
than the 64297840.86 end, and indeed that is what the "Limit ..."
estimate shows.  A bad plan (in your case) is encouraged here by the
combination of "LIMIT" and "ORDER BY".

For real background on this, and calculated recommendations, we'd need
that more detailed output though.

As a quick hack, it's possible that you could improve things by
increasing the samples on relevant columns with some judicious "ALTER
TABLE ... ALTER COLUMN ... SET STATISTICS ..." commands.
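
A sketch of that quick hack (the column names come from the query
above; the target value 200 is only an illustrative choice — the
default statistics target on 7.4 is 10):

```sql
-- Keep more sampled values for the columns the planner estimates on,
-- then re-gather statistics so the new targets take effect.
ALTER TABLE custacct ALTER COLUMN referrer SET STATISTICS 200;
ALTER TABLE custacct ALTER COLUMN orderdate SET STATISTICS 200;
ANALYZE custacct;
```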

Cheers,
                    Andrew McMillan.

-------------------------------------------------------------------------
Andrew @ Catalyst .Net .NZ  Ltd,  PO Box 11-053, Manners St,  Wellington
WEB: http://catalyst.net.nz/            PHYS: Level 2, 150-154 Willis St
DDI: +64(4)803-2201      MOB: +64(272)DEBIAN      OFFICE: +64(4)499-2267
                     Planning an election?  Call us!
-------------------------------------------------------------------------


From:
Sven Willenberger
Date:


Andrew McMillan wrote:
> On Mon, 2004-12-13 at 01:13 -0500, Sven Willenberger wrote:
>
>>I have a question regarding a serious performance hit taken when using a
>>LIMIT clause. I am using version 7.4.6 on FreeBSD 4.10-STABLE with 2GB
>>of memory. The table in question contains some 25 million rows with a
>>bigserial primary key, orderdate index and a referrer index. The 2
>>select statements are as follow:
>
>
> It's an interesting question, but to be able to get answers from this
> list you will need to provide "EXPLAIN ANALYZE ..." rather than just
> "EXPLAIN ...".
>

A) Query without limit clause:
explain analyze select storelocation,order_number from custacct where
referrer = 1365  and orderdate between '2004-12-07' and '2004-12-07
12:00:00' order by custacctid;

        QUERY PLAN


-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
  Sort  (cost=1226485.32..1226538.78 rows=21382 width=43) (actual
time=30340.322..30426.274 rows=21432 loops=1)
    Sort Key: custacctid
    ->  Index Scan using orderdate_idx on custacct
(cost=0.00..1224947.52 rows=21382 width=43) (actual
time=159.218..30196.686 rows=21432 loops=1)
          Index Cond: ((orderdate >= '2004-12-07 00:00:00'::timestamp
without time zone) AND (orderdate <= '2004-12-07 12:00:00'::timestamp
without time zone))
          Filter: (referrer = 1365)
  Total runtime: 30529.151 ms
(6 rows)

************************************

A2) Same query run again, to see effect of caching:
explain analyze select storelocation,order_number from custacct where
referrer = 1365  and orderdate between '2004-12-07' and '2004-12-07
12:00:00' order by custacctid;

        QUERY PLAN


-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
  Sort  (cost=1226485.32..1226538.78 rows=21382 width=43) (actual
time=1402.410..1488.395 rows=21432 loops=1)
    Sort Key: custacctid
    ->  Index Scan using orderdate_idx on custacct
(cost=0.00..1224947.52 rows=21382 width=43) (actual time=0.736..1259.964
rows=21432 loops=1)
          Index Cond: ((orderdate >= '2004-12-07 00:00:00'::timestamp
without time zone) AND (orderdate <= '2004-12-07 12:00:00'::timestamp
without time zone))
          Filter: (referrer = 1365)
  Total runtime: 1590.675 ms
(6 rows)

***********************************

B) Query run with LIMIT

explain analyze select storelocation,order_number from custacct where
referrer = 1365  and orderdate between '2004-12-07' and '2004-12-07
12:00:00' order by custacctid limit 10;

                  QUERY PLAN


-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  Limit  (cost=0.00..43065.76 rows=10 width=43) (actual
time=1306957.216..1307072.111 rows=10 loops=1)
    ->  Index Scan using custacct2_pkey on custacct
(cost=0.00..92083209.38 rows=21382 width=43) (actual
time=1306957.205..1307072.017 rows=10 loops=1)
          Filter: ((referrer = 1365) AND (orderdate >= '2004-12-07
00:00:00'::timestamp without time zone) AND (orderdate <= '2004-12-07
12:00:00'::timestamp without time zone))
  Total runtime: 1307072.231 ms
(4 rows)

************************************

C) Query using the subselect variation

explain analyze select foo.storelocation, foo.order_number from (select
storelocation,order_number from custacct where referrer = 1365  and
orderdate between '2004-12-07' and '2004-12-07 12:00:00' order by
custacctid) as foo  limit 10;

              QUERY PLAN


-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  Limit  (cost=1226485.32..1226485.45 rows=10 width=100) (actual
time=1413.829..1414.024 rows=10 loops=1)
    ->  Subquery Scan foo  (cost=1226485.32..1226752.60 rows=21382
width=100) (actual time=1413.818..1413.933 rows=10 loops=1)
          ->  Sort  (cost=1226485.32..1226538.78 rows=21382 width=43)
(actual time=1413.798..1413.834 rows=10 loops=1)
                Sort Key: custacctid
                ->  Index Scan using orderdate_idx on custacct
(cost=0.00..1224947.52 rows=21382 width=43) (actual time=0.740..1272.380
rows=21432 loops=1)
                      Index Cond: ((orderdate >= '2004-12-07
00:00:00'::timestamp without time zone) AND (orderdate <= '2004-12-07
12:00:00'::timestamp without time zone))
                      Filter: (referrer = 1365)
  Total runtime: 1418.964 ms
(8 rows)


Thanks,
Sven

From:
Tom Lane
Date:

Sven Willenberger <> writes:
> explain analyze select storelocation,order_number from custacct where
> referrer = 1365  and orderdate between '2004-12-07' and '2004-12-07
> 12:00:00' order by custacctid limit 10;

>                   QUERY PLAN

>
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>   Limit  (cost=0.00..43065.76 rows=10 width=43) (actual
> time=1306957.216..1307072.111 rows=10 loops=1)
>     ->  Index Scan using custacct2_pkey on custacct
> (cost=0.00..92083209.38 rows=21382 width=43) (actual
> time=1306957.205..1307072.017 rows=10 loops=1)
>           Filter: ((referrer = 1365) AND (orderdate >= '2004-12-07
> 00:00:00'::timestamp without time zone) AND (orderdate <= '2004-12-07
> 12:00:00'::timestamp without time zone))
>   Total runtime: 1307072.231 ms
> (4 rows)

I think this is the well-known issue of lack of cross-column correlation
statistics.  The planner is well aware that this indexscan will be
horridly expensive if run to completion --- but it's assuming that
stopping after 10 rows, or 10/21382 of the total scan, will cost only
about 10/21382 as much as the whole scan would.  This amounts to
assuming that the rows matching the filter condition are randomly
distributed among all the rows taken in custacctid order.  I suspect
that your test case actually has a great deal of correlation between
custacctid and referrer/orderdate, such that the indexscan in custacctid
order ends up fetching many more rows that fail the filter condition
than random chance would suggest, before it finally comes across 10 that
pass the filter.
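
The linear scaling described above can be checked against the numbers
in the plan: multiplying the full indexscan cost by the fraction of
rows the LIMIT expects to fetch reproduces the Limit node's estimate
almost exactly.

```sql
-- Optimistic Limit cost = full-scan cost * (rows wanted / rows expected)
SELECT 92083209.38 * 10 / 21382;  -- ≈ 43065.76, the Limit cost shown above
```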

There isn't any near-term fix in the wind for this, since storing
cross-column statistics is an expensive proposition that we haven't
decided how to handle.  Your workaround with separating the ORDER BY
from the LIMIT is a good one.

            regards, tom lane

From:
Sven Willenberger
Date:

On Mon, 2004-12-13 at 17:43 -0500, Tom Lane wrote:
> Sven Willenberger <> writes:
> > explain analyze select storelocation,order_number from custacct where
> > referrer = 1365  and orderdate between '2004-12-07' and '2004-12-07
> > 12:00:00' order by custacctid limit 10;
>
> >                   QUERY PLAN
>
> >
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> >   Limit  (cost=0.00..43065.76 rows=10 width=43) (actual
> > time=1306957.216..1307072.111 rows=10 loops=1)
> >     ->  Index Scan using custacct2_pkey on custacct
> > (cost=0.00..92083209.38 rows=21382 width=43) (actual
> > time=1306957.205..1307072.017 rows=10 loops=1)
> >           Filter: ((referrer = 1365) AND (orderdate >= '2004-12-07
> > 00:00:00'::timestamp without time zone) AND (orderdate <= '2004-12-07
> > 12:00:00'::timestamp without time zone))
> >   Total runtime: 1307072.231 ms
> > (4 rows)
>
> I think this is the well-known issue of lack of cross-column correlation
> statistics.  The planner is well aware that this indexscan will be
> horridly expensive if run to completion ---
> <snip>
> There isn't any near-term fix in the wind for this, since storing
> cross-column statistics is an expensive proposition that we haven't
> decided how to handle.  Your workaround with separating the ORDER BY
> from the LIMIT is a good one.
>

You are correct in that there is a high degree of correlation between
the custacctid (which is a serial key) and the orderdate as the orders
generally get entered in the order that they arrive. I will go with the
workaround subselect query plan then.

On a related note, is there a way (other than set enable_seqscan=off) to
give a hint to the planner that it is cheaper to use an index scan
versus a seq scan? Using the "workaround" query on any time period
greater than 12 hours results in the planner using a seq scan. Disabling
the seq scan and running the query on a full-day period, for example, shows:

explain analyze select foo.storelocation, foo.order_number from (select
storelocation,order_number from custacct where referrer = 1365  and
orderdate between '2004-12-09' and '2004-12-10' order by custacctid) as
foo  limit 10 offset 100;

QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=2661326.22..2661326.35 rows=10 width=100) (actual
time=28446.605..28446.796 rows=10 loops=1)
   ->  Subquery Scan foo  (cost=2661324.97..2661866.19 rows=43297
width=100) (actual time=28444.916..28446.298 rows=110 loops=1)
         ->  Sort  (cost=2661324.97..2661433.22 rows=43297 width=41)
(actual time=28444.895..28445.334 rows=110 loops=1)
               Sort Key: custacctid
               ->  Index Scan using orderdate_idx on custacct
(cost=0.00..2657990.68 rows=43297 width=41) (actual
time=4.432..28145.212 rows=44333 loops=1)
                     Index Cond: ((orderdate >= '2004-12-09
00:00:00'::timestamp without time zone) AND (orderdate <= '2004-12-10
00:00:00'::timestamp without time zone))
                     Filter: (referrer = 1365)
 Total runtime: 28456.893 ms
(8 rows)


If I interpret the above correctly, the planner estimates a cost of
2661326 but the actual cost is much less (assuming time is equivalent to
cost). Would the SET STATISTICS command be of any benefit here in
"training" the planner?

Sven




From:
Tom Lane
Date:

Sven Willenberger <> writes:
> On a related note, is there a way (other than set enable_seqscan=off) to
> give a hint to the planner that it is cheaper to use and index scan
> versus seq scan?

There are basically two things you can do.  One: if the planner's
rowcount estimates are badly off, you can try increasing the stats
targets for relevant columns in hopes of making the estimates better.
A too-large rowcount estimate will improperly bias the decision towards
seqscan.  Two: if the rowcounts are in the right ballpark but the
estimated costs have nothing to do with reality, you can try tuning
the planner's cost parameters to make the model match local reality
a bit better.  random_page_cost is the grossest knob here;
effective_cache_size is also worth looking at.  See the
pgsql-performance archives for more discussion.
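
A sketch of the second approach (the values below are only
illustrative starting points, not recommendations for this system):

```sql
-- Lower random_page_cost when random I/O is cheaper than the default
-- model assumes (e.g. the working set is largely cached); raise
-- effective_cache_size to reflect memory actually available for caching.
SET random_page_cost = 2;           -- default is 4
SET effective_cache_size = 131072;  -- in 8kB pages; ~1GB on a 2GB box
```

Once a value proves out across many queries, it can be made permanent
in postgresql.conf rather than SET per session.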

>                ->  Index Scan using orderdate_idx on custacct
> (cost=0.00..2657990.68 rows=43297 width=41) (actual
> time=4.432..28145.212 rows=44333 loops=1)

In this case there's already a pretty good match between actual and
estimated rowcount, so increasing the stats targets isn't likely to
improve the plan choice; especially since a more accurate estimate would
shift the costs in the "wrong" direction anyway.  Look to the cost
parameters, instead.

Standard disclaimer: don't twiddle the cost parameters on the basis
of only one test case.

            regards, tom lane

From:
Pierre-Frédéric Caillaud
Date:

On Mon, 13 Dec 2004 17:43:07 -0500, Tom Lane <> wrote:

> Sven Willenberger <> writes:
>> explain analyze select storelocation,order_number from custacct where
>> referrer = 1365  and orderdate between '2004-12-07' and '2004-12-07
>> 12:00:00' order by custacctid limit 10;

    Why not create an index on (referrer, orderdate)?
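
That suggestion would look something like this (a sketch; the index
name is hypothetical). A composite index lets the planner satisfy both
the equality on referrer and the range on orderdate directly, instead
of filtering rows fetched through a single-column index:

```sql
-- Serve "referrer = ? AND orderdate BETWEEN ? AND ?" from one index.
CREATE INDEX custacct_ref_odate_idx ON custacct (referrer, orderdate);
ANALYZE custacct;
```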