Re: significant slow down with various LIMIT

From: Kevin Grittner
Subject: Re: significant slow down with various LIMIT
Date:
Msg-id: 4BC34A340200002500030759@gw.wicourts.gov
In response to: Re: significant slow down with various LIMIT  (norn <andrey.perliev@gmail.com>)
List: pgsql-performance
norn <andrey.perliev@gmail.com> wrote:

> I restarted Postgresql with new settings and got no performance
> improvements in this particular query...

The cost settings help the optimizer make good decisions about plan
choice.  At this point I don't have much reason to believe there is a
better plan for it to choose for this query.  Do you see one?  What
would it be?  (We might be able to force that plan and find out
whether you're right, which can be a valuable diagnostic step, even
if the way it gets forced isn't a production-quality solution.)
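In case it helps, here's a sketch of how a plan is usually forced for
diagnosis — the enable_* settings are real PostgreSQL GUCs, but the
table and query below are placeholders, not from your schema:

```sql
-- Diagnostic only: turn off the node type the optimizer currently
-- prefers so it must pick the alternative plan, then compare timings.
BEGIN;
SET LOCAL enable_seqscan = off;   -- e.g. force index use over seq scans
EXPLAIN ANALYZE
SELECT *
  FROM my_table                   -- hypothetical; substitute your query
 ORDER BY id
 LIMIT 100;
ROLLBACK;                         -- SET LOCAL reverts with the transaction
```

Using SET LOCAL inside a transaction keeps the override from leaking
into the rest of the session.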

Are you able to share the table descriptions?  (That might help us
suggest an index or something similar that could help.)

> Do you have ideas how much random_page_cost and seq_page_cost
> should be decreased?

It really depends on how much of your active data set is cached.  If
it is effectively fully cached, you might want to go to 0.01 for
both (or even lower).  Many of our databases perform best with
seq_page_cost = 1 and random_page_cost = 2.  With some, either of
those "extremes" causes some queries to optimize poorly, and we've
had luck with 0.3 and 0.5.  This is one worth testing with your
workload, because you can make some queries faster at the expense of
others; sometimes it comes down to which needs better response time
to keep your users happy.
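For what it's worth, you can try candidate values per session before
touching postgresql.conf — the numbers below are just the ones
mentioned above, not a recommendation for your workload:

```sql
-- Try cheaper page costs for this session only and re-run the query;
-- if the plan and timing improve, consider making the change permanent.
SET seq_page_cost = 0.3;
SET random_page_cost = 0.5;
EXPLAIN ANALYZE SELECT ...;  -- your problem query here
RESET seq_page_cost;         -- back to the server defaults
RESET random_page_cost;
```

That lets you compare plans under several cost settings without
affecting other sessions.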

-Kevin
