Re: Overhauling GUCS

From: Gregory Stark
Subject: Re: Overhauling GUCS
Date:
Msg-id: 87d4mqwdm8.fsf@oxford.xeocode.com
In response to: Re: Overhauling GUCS (Josh Berkus <josh@agliodbs.com>)
Responses: Re: Overhauling GUCS
           Re: Overhauling GUCS
List: pgsql-hackers
"Josh Berkus" <josh@agliodbs.com> writes:

> Where analyze does systematically fall down is with databases over 500GB in
> size, but that's not a function of d_s_t but rather of our tiny sample size.

Speak to the statisticians. Our sample size is calculated using the same
theory behind polls which sample 600 people to learn what 250 million people
are going to do on election day. You do NOT need (significantly) larger
samples for larger populations.
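
For the record, the result this rests on is the usual margin-of-error
formula (a sketch using the standard polling assumptions, 95% confidence
and worst-case p = 0.5; nothing specific to our code):

    \text{margin of error} \approx z\sqrt{\frac{p(1-p)}{n}}
                           \approx 1.96\sqrt{\frac{0.25}{600}}
                           \approx 0.04

The population size N only enters through the finite-population correction
\sqrt{(N-n)/(N-1)}, which is essentially 1 whenever N is much larger than
n, so the 250 million simply drops out.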

In fact, where those polls have difficulty is the same place we have
problems. For *smaller* populations like individual congressional races you
need nearly the same 600-person sample for each of those small races, which
adds up to a lot more than 600 total. In our case it means that when a query
covers a range much smaller than a whole histogram bucket, the confidence
interval increases too.
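
Roughly (a back-of-the-envelope sketch; the 3,000-row figure assumes the
~300 x statistics_target sample ANALYZE takes at the old default target of
10, and f is the fraction of the sampled values the predicate actually
covers):

    \text{relative error} \approx \frac{1}{\sqrt{k}}, \qquad k \approx f \cdot n

So with n = 3,000 and a predicate that covers only 1% of the sampled
values, k is about 30 and you're looking at roughly +/-18% relative error,
versus under +/-2% when the predicate spans the whole sample.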

Also, our estimates for n_distinct are very unreliable. The math behind
sampling for statistics just doesn't work the same way for properties like
n_distinct. On that point Josh is right: we *would* need a sample size
proportional to the whole data set, which would practically require us to
scan the whole table (and to have a technique for summarizing the results
in a nearly constant-sized data structure).
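
To illustrate, here's a toy simulation (Python, not ANALYZE's actual
estimator; the table shape and the 30,000-row sample are made-up numbers
for illustration). Both "tables" share the same 1,000 hot values covering
99% of the rows and differ only in how many rare values fill the remaining
1%, so their true n_distinct differs by a factor of ~50 while their samples
look statistically identical:

    # Toy simulation: a fixed-size sample can't tell these two columns apart.
    import random

    SAMPLE = 30_000   # about what ANALYZE draws at statistics_target = 100
    HOT = 1_000       # hot values covering 99% of the rows

    for rare_values in (1_000_000, 50_000_000):
        sample = []
        for _ in range(SAMPLE):
            if random.random() < 0.99:
                sample.append(random.randrange(HOT))                 # hot value
            else:
                sample.append(HOT + random.randrange(rare_values))   # rare value
        print(f"true n_distinct ~ {HOT + rare_values:>11,}   "
              f"distinct values in sample = {len(set(sample)):,}")

Both runs report roughly 1,300 distinct values in the sample, so an
estimator working from the sample alone has no way to choose between
n_distinct around 1 million and around 50 million.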

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!

