Re: slower merge join on sorted data chosen over

From: Kevin Grittner
Subject: Re: slower merge join on sorted data chosen over
Msg-id: s345c3c4.063@gwmta.wicourts.gov
List: pgsql-hackers
I am absolutely sure that I never changed any of the enable_xxx
settings other than enable_mergejoin during these tests.  The only
other setting I played with was random_page_cost, but the nested
loop query always chose the plain index scan in my tests.
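For the record, the session setup during these tests amounted to
nothing more than the following (a sketch; the problem query itself
is elided):

  -- session-local planner settings used while testing
  SET enable_mergejoin = off;    -- the only enable_xxx flag I changed
  SET random_page_cost = 1.2;    -- the other knob I experimented with
  SHOW enable_bitmapscan;        -- confirms the other flags are still on
  -- then re-run the problem query under EXPLAIN ANALYZE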
There was one odd thing.  I started testing on one machine, then
we dumped it and loaded it onto a new machine, to try a slightly
different configuration for comparison.  I re-ran the tests and
noticed that on both boxes, with no enable_xxx options turned off,
when I reduced the random_page_cost to 1.2 the merge case went
to a plain index scan rather than the bitmap scan.  What was odd was
that on the original box, the bitmap scan version was consistently
faster, while on the new box it was slightly slower.  I repeated the
runs a few times because I found that surprising, but it seemed to
hold.  That seems more likely to be related to the density of the
index pages following the restore than to the difference in
hardware or OS.  It wasn't a very dramatic difference either way.
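If anyone wants to chase the page-density theory, the numbers are
easy enough to compare on the two boxes (a sketch; pg_class counts
are only as current as the last VACUUM or ANALYZE):

  -- rough check of index size after the dump-and-restore
  SELECT relname, relpages, reltuples
    FROM pg_class
   WHERE relname = 'DbTranRepository_timestamp';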

I notice that on the fully cached runs, the actual time for the plain
index was (barely) faster than the bitmap heap scan:

  ->  Bitmap Heap Scan on "DbTranRepository" dtr  (cost=297.07..47081.47 rows=25067 width=17) (actual time=30.432..427.463 rows=39690 loops=1)
  ->  Index Scan using "DbTranRepository_timestamp" on "DbTranRepository" dtr  (cost=0.00..49419.45 rows=25067 width=17) (actual time=0.083..412.268 rows=39690 loops=1)
However, it doesn't seem to be predicting that result when you look
at the cost numbers, so it is a mystery.  Is there anything I can set to
tell it to spit out the cost info for every path it considers, so we can
see why it is making this choice?  It doesn't bias toward a lower
starting cost when it is going to use a nested index scan, does it?
(OK, "starting cost" is probably not the right argot -- I'm talking about
the "0.00" in "cost=0.00..49419.45".)
As for expectations -- an experienced programmer who knew the
schema and the data, and who had just written the queries to retrieve
them, was testing the application with various selection criteria, and
saying something like "This one should be sub-second. Check.
This one should take a few seconds.  Check.  Here's a big one, this'll
probably take two or three minutes.  Dang -- it took seven and a half
minutes.  Could you look at this and see why it's so slow?  It seems
like it should be able to use this index and get the job done faster."
Some might call this "gut feel"; others might describe it as putting
forward a series of hypotheses, testing each, and investigating
unexpected results.    ;-)
The good news is that we don't think this sort of query will be
chosen by the users very often -- a few times a month, perhaps,
among the roughly 200 management-level people who will have
access to this application.  An extra five minutes per query isn't a
deal breaker, although it would obviously be nice if it ran faster.
I'll review the archives for prior discussions of the problem.
-Kevin
>>> Tom Lane <tgl@sss.pgh.pa.us> 10/06/05 9:28 PM >>>
"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:
> In both the 8.1beta2 and using a build from this morning's
> dev snapshot, this query ran slower than expected:

There's a known issue that the planner tends to overestimate the cost of
inner-index-scan nestloops, because it doesn't allow for the strong
caching effects associated with repeated scans of the same index (in
particular, that all the upper index levels tend to stay in cache).
See the archives for past inconclusive discussions about how to fix
this.
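The usual stopgap in the meantime is to pull random_page_cost down
so that repeated index probes are costed closer to their mostly-cached
reality (a sketch, not a recommendation for a permanent value):

  -- stopgap, not a fix: assume random page fetches are mostly cached
  SET random_page_cost = 2.0;   -- default is 4.0; tune to the workload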

However, something else caught my eye:

>                ->  Bitmap Heap Scan on "DbTranRepository" dtr  (cost=297.07..47081.47 rows=25067 width=17) (actual
time=69.056..5560.895rows=39690 loops=1) 

>          ->  Index Scan using "DbTranRepository_timestamp" on "DbTranRepository" dtr  (cost=0.00..49419.45 rows=25067
width=17)(actual time=33.625..11510.723 rows=39690 loops=1) 

I don't understand why the second case chose a plain index scan when
there was no need for sorted output; the bitmap scan is faster both
per cost estimate and in reality.  Are you sure you turned off only
enable_mergejoin and not enable_bitmapscan?

Also, when you say "slower than expected", what is setting your
expectation?
        regards, tom lane


