Re: [PERFORM] Bad n_distinct estimation; hacks suggested?

From: Andrew Dunstan
Subject: Re: [PERFORM] Bad n_distinct estimation; hacks suggested?
Date:
Msg-id: 426D565E.8040400@dunslane.net
In response to: Re: [PERFORM] Bad n_distinct estimation; hacks suggested?  (Josh Berkus <josh@agliodbs.com>)
Responses: Re: [PERFORM] Bad n_distinct estimation; hacks suggested?  (Mischa Sandberg <mischa.sandberg@telus.net>)
List: pgsql-hackers

Josh Berkus wrote:

>Simon, Tom:
>
>While it's not possible to get accurate estimates from a fixed size sample, I
>think it would be possible from a small but scalable sample: say, 0.1% of all
>data pages on large tables, up to the limit of maintenance_work_mem.
>
>Setting up these samples as a % of data pages, rather than a pure random sort,
>makes this more feasible; for example, a 70GB table would only need to sample
>about 9000 data pages (or 70MB).  Of course, larger samples would lead to
>better accuracy, and this could be set through a revised GUC (i.e.,
>maximum_sample_size, minimum_sample_size).
>
>I just need a little help doing the math ... please?
>
>

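For what it's worth, the sample-size arithmetic above checks out, assuming
the default 8 kB block size (a rough sketch, using Josh's 70GB / 0.1%
figures):

    # Sketch of the sample-size arithmetic (assumes 8 kB pages).
    table_bytes = 70 * 1024**3                         # 70 GB example table
    page_bytes = 8 * 1024                              # default block size
    total_pages = table_bytes // page_bytes            # ~9.2 million pages
    sample_pages = int(total_pages * 0.001)            # ~9,200 pages
    sample_mb = sample_pages * page_bytes / 1024**2    # ~72 MB
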

After some more experimentation, I'm wondering about some sort of
adaptive algorithm, a bit along the lines suggested by Marko Ristola,
but limited to 2 rounds.

The idea would be that we take a sample (either of fixed size, or some
small proportion of the table), see how well it fits a larger sample
(say, a few times the size of the first sample), and then adjust the
formula accordingly to project from the larger sample the estimate for
the full population. The math isn't worked out yet - I think we want to
ensure that the result remains bounded by [d,N].
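A minimal sketch of the shape I have in mind (illustrative only - the
names and the correction step are placeholders, and the naive linear
projection would need to be replaced by whatever formula falls out of
the math; the result is clamped to [d,N] as above):

    import random

    def n_distinct_estimate(rows, n_total, small_frac=0.01, ratio=4, seed=0):
        """Two-round adaptive n_distinct estimate (sketch only)."""
        rng = random.Random(seed)

        # Round 1: a small sample; round 2: a few times larger.
        small = rng.sample(rows, max(1, int(len(rows) * small_frac)))
        large = rng.sample(rows, min(len(rows), len(small) * ratio))

        def distinct(s):
            return len(set(s))

        # Naive projection: scale the distinct count by population/sample size.
        def project(d_sample, n_sample, n_target):
            return d_sample * n_target / n_sample

        # See how well the round-1 projection predicts the larger sample ...
        predicted = project(distinct(small), len(small), len(large))
        actual = distinct(large)
        correction = actual / predicted

        # ... and use that to adjust the projection from the larger sample
        # to the full population.
        estimate = project(actual, len(large), n_total) * correction

        # Keep the result bounded by [d, N].
        d = actual
        return max(d, min(n_total, int(estimate)))

Here rows would be the sampled column values and n_total the table's row
count estimate; the point is only the two-round structure, not the
particular projection used.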

cheers

andrew


