large table starting sequence scan because of default_statistic_target

From: Janning Vygen
Subject: large table starting sequence scan because of default_statistic_target
Date:
Msg-id: 200903161415.59238.vygen@kicktipp.de
Replies: Re: large table starting sequence scan because of default_statistic_target  (Scott Marlowe <scott.marlowe@gmail.com>)
List: pgsql-general
Hi,

we are running a large 8.3 database and ran into trouble with the default
statistics target. Some time ago we had set a higher statistics target on one
particular table, after that growing table started being planned with
sequential scans.

Last week we manually clustered this table (CREATE TABLE ... AS ... ORDER BY;
DROP TABLE orig; rename the temp table to orig). Of course the per-column
statistics target was lost in the process, and we forgot to set it again.
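
For reference, a minimal sketch of the kind of manual clustering we did, with
the step we missed added back (table and column names are made up for
illustration):

    -- rebuild the table in the desired physical order
    CREATE TABLE orig_tmp AS SELECT * FROM orig ORDER BY some_col;
    DROP TABLE orig;
    ALTER TABLE orig_tmp RENAME TO orig;

    -- this is the part we forgot: the per-column statistics target
    -- does not survive the rebuild and has to be set again
    ALTER TABLE orig ALTER COLUMN some_col SET STATISTICS 100;
    ANALYZE orig;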

Why does default_statistics_target default to 10?

The documentation tells me the only drawback is a longer ANALYZE run. We are
setting it to 100 in postgresql.conf and have not seen ANALYZE take noticeably
longer. Of course, smaller tables don't need a setting of 100, but small
tables are usually not very interesting when it comes to performance.
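
For illustration, this is roughly the change we made, plus the re-ANALYZE that
makes the larger sample take effect (the value 100 is our choice, not an
official recommendation):

    # postgresql.conf
    default_statistics_target = 100

    -- after reloading the configuration, refresh the statistics
    ANALYZE;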

With a setting of 10 you run into nasty problems as your table grows. Suddenly
an execution plan changes and you get sequential scans on your largest table!
We had exactly that, and it was annoying to suffer a real slowdown just
because of this minor configuration parameter.
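
A quick way to see whether this has happened is to look at the plan of the
affected query; in our case it had flipped from an index scan to a sequential
scan on the big table (table, column and index names here are only examples):

    EXPLAIN ANALYZE SELECT * FROM orig WHERE some_col = 42;
    -- with too small a statistics target the planner may pick
    --   Seq Scan on orig ...
    -- instead of the expected
    --   Index Scan using orig_some_col_idx on orig ...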

I suggest setting it to 100 by default:
- no problems for small installations
- no problems for DBAs who tune their systems in every possible way anyway
- no problems for growing databases with unequally distributed data

But maybe there are other reasons not to set it to a higher value. If so,
please tell me.

kind regards
Janning
