Re: random_page_cost = 2.0 on Heroku Postgres

From: Peter Geoghegan
Subject: Re: random_page_cost = 2.0 on Heroku Postgres
Date:
Msg-id: CAEYLb_WvD6gyibab7w=tCF4dQ7qD5AQjxGF348gZJM+r=oNhJQ@mail.gmail.com
In reply to: Re: random_page_cost = 2.0 on Heroku Postgres  (Peter van Hardenberg <pvh@pvh.ca>)
List: pgsql-performance
On 12 February 2012 22:28, Peter van Hardenberg <pvh@pvh.ca> wrote:
> Yes, I think if we could normalize, anonymize, and randomly EXPLAIN
> ANALYZE 0.1% of all queries that run on our platform we could look for
> bad choices by the planner. I think the potential here could be quite
> remarkable.

Tom Lane suggested that plans, rather than the query tree, might be a
more appropriate thing for the new pg_stat_statements to be hashing,
as plans should be directly blamed for execution costs. While I don't
think that that's appropriate for normalisation (consider that there'd
often be duplicate pg_stat_statements entries per query), it does seem
like an idea that could be worked into a future revision, to detect
problematic plans. Maybe it could be usefully combined with
auto_explain or something like that (in a revision of auto_explain
that doesn't necessarily explain every plan, and therefore doesn't pay
the considerable overhead of that instrumentation across the board).
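The selective auto_explain described above can be sketched as configuration. Note this is an illustration, not anything that existed when this message was written: `auto_explain.sample_rate` is an assumed parameter here (PostgreSQL later added a parameter by that name, in 9.6), and the exact names and defaults should be checked against your server's documentation.

```
# postgresql.conf -- sketch of sampled plan logging (assumed parameters, see above)
shared_preload_libraries = 'pg_stat_statements,auto_explain'

# Log plans for only a small random fraction of executed statements,
# avoiding across-the-board instrumentation overhead.
auto_explain.sample_rate = 0.001

auto_explain.log_min_duration = 0   # consider every sampled statement
auto_explain.log_analyze = on       # include actual timings and row counts
```

With something like this, logged plans could then be cross-referenced against pg_stat_statements entries to spot queries whose plans look problematic.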

--
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services
