Re: slow bitmap heap scans on pg 9.2

From: Jeff Janes
Subject: Re: slow bitmap heap scans on pg 9.2
Msg-id: CAMkU=1ykX8A36=n3BO9u8aGuJtRHB+axRWoEEsorMd=f6rgN5A@mail.gmail.com
In reply to: Re: slow bitmap heap scans on pg 9.2 (Steve Singer <ssinger@ca.afilias.info>)
Responses: Re: slow bitmap heap scans on pg 9.2
List: pgsql-performance
On Sat, Apr 13, 2013 at 9:14 AM, Steve Singer <ssinger@ca.afilias.info> wrote:

    
> indexTotalCost += index->pages * spc_random_page_cost / 100000.0;
>
> is driving my high costs on the inner loop. The index has 2-5 million pages depending on the partition. If I run this against 9.2.2 with /10000.0 the estimate is even higher.
>
> If I try this with the
>
> indexTotalCost += log(1.0 + index->pages / 10000.0) * spc_random_page_cost;
>
> from 9.3 and play with it, I can make it pick the plan on some partitions with product_id=2 but not product_id=1. If I remove the fudge-factor cost adjustment line I always get the nested-loop plan.

That was only temporarily the formula during 9.3dev.  Tom redid that entire part of the code rather substantially in the current tip of 9.3 (commit 31f38f28b00cbe2b).  Now it is based on the number of tuples and the tree height, rather than pages, and is multiplied by cpu_operator_cost rather than random_page_cost.

descentCost = ceil(log(index->tuples) / log(2.0)) * cpu_operator_cost;

...

descentCost = (index->tree_height + 1) * 50.0 * cpu_operator_cost;


> Breaking the index into smaller partial indexes for each year seems to be giving me the plans I want with random_page_cost=2 (I might also try partial indexes on the month).
>
> Even with the 9.3 log-based fudge factor, we are seeing it be big enough that the planner picks a table scan over the index.

Have you tried it under 9.3 HEAD, rather than just back-porting the temporary
indexTotalCost += log(1.0 + index->pages / 10000.0) * spc_random_page_cost;
code into 9.2?

If you are trying to make your own private copy of 9.2, then removing the fudge factor altogether is probably the way to go.  But if you want to help improve future versions, you probably need to test with the most up-to-date dev version.

 
> A lot of loop iterations can be satisfied by cached pages of the index; the fudge factor doesn't really account for this.


Setting random_page_cost to 2 is already telling it that most fetches are coming from the cache.  Of course for the upper blocks of an index even more than "most" are likely to be, but the latest dev code takes care of that.

Cheers,

Jeff
