Re: Slow query using the Cube contrib module.

From: Tom Lane
Subject: Re: Slow query using the Cube contrib module.
Date:
Msg-id: 16994.1280269130@sss.pgh.pa.us
In reply to: Re: Slow query using the Cube contrib module.  (Yeb Havinga <yebhavinga@gmail.com>)
List: pgsql-performance
Yeb Havinga <yebhavinga@gmail.com> writes:
> Liviu Mirea-Ghiban wrote:
>> My question is: Why is it doing a Bitmap Heap Scan / Recheck Cond?
>> I've executed dozens of such queries and not once did the rechecking
>> remove any rows. Is there any way to disable it, or do you have any
>> other suggestions for optimizations (because I'm all out of ideas)?

> It's probably because the index nodes store data values with a lossy
> compression, which means that the index scan returns more rows than
> wanted, and that in turn is filtered out by the rescanning.

The recheck expression is only executed if the index reports that it
hasn't executed the search exactly.  If you don't see any difference
between the indexscan and bitmapscan output counts, it's probably
because the index can handle this case exactly, so the recheck expression
isn't really getting used.  The planner has to include the expression
in the plan anyway, because the decision about lossiness is not known
until runtime.  But it's not costing any runtime.
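[Editor's note: the point above can be checked directly. A minimal sketch, assuming a hypothetical table "points" with a cube column "c" indexed with GiST; the plan shape and counters will vary with data and PostgreSQL version:]

```sql
-- Illustrative only: "points" and "c" are assumed names, not from the thread.
-- The <@ (contained-in) operator on cube values is indexable via GiST.
EXPLAIN (ANALYZE)
SELECT *
FROM points
WHERE c <@ cube(array[0,0,0], array[10,10,10]);
-- In PostgreSQL 9.2 and later, a lossy bitmap scan reports a line such as
--   Rows Removed by Index Recheck: 0
-- under the Bitmap Heap Scan node.  A zero (or absent) counter confirms
-- the recheck condition never actually discarded a row, matching the
-- behavior described above.
```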

The OP is mistaken to think there's anything wrong with this plan choice
-- more than likely, it's the best available plan.  The reason there's
a significant gap between the indexscan runtime and the bitmapscan
runtime is that that's the cost of actually going and fetching all those
rows from the table.  The only way to fix that is to buy faster disks
or add more RAM so that more of the table can be held in memory.
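[Editor's note: whether the runtime gap really is heap-fetch I/O can be verified with the BUFFERS option of EXPLAIN (available since PostgreSQL 9.0). A hedged sketch, reusing the same assumed "points" table:]

```sql
-- Illustrative only: shows page-level I/O for the same assumed query.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM points
WHERE c <@ cube(array[0,0,0], array[10,10,10]);
-- Look for "Buffers: shared hit=... read=..." under the Bitmap Heap Scan
-- node.  A large "read" count means table pages are coming from disk
-- rather than shared buffers, which is exactly the cost described above
-- and the case where faster disks or more RAM would help.
```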

            regards, tom lane
