Re: [PERFORM] Bad n_distinct estimation; hacks suggested?

From: Mischa Sandberg
Subject: Re: [PERFORM] Bad n_distinct estimation; hacks suggested?
Date:
Msg-id: 1114580284.426f253cc0087@webmail.telus.net
In response to: Re: [PERFORM] Bad n_distinct estimation; hacks suggested?  (Andrew Dunstan <andrew@dunslane.net>)
Responses: Re: [PERFORM] Bad n_distinct estimation; hacks suggested?
List: pgsql-hackers
Quoting Andrew Dunstan <andrew@dunslane.net>:

> After some more experimentation, I'm wondering about some sort of
> adaptive algorithm, a bit along the lines suggested by Marko Ristola,
> but limited to 2 rounds.
>
> The idea would be that we take a sample (either of fixed size, or some
> small proportion of the table), see how well it fits a larger sample
> (say a few times the size of the first sample), and then adjust the
> formula accordingly to project from the larger sample the estimate for
> the full population. Math not worked out yet - I think we want to
> ensure that the result remains bounded by [d,N].
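
To make sure I'm reading the proposal the way you intend, here is roughly
the shape I take the two rounds to have. The adjustment step is pure
guesswork on my part (you say the math isn't worked out yet), and the
scale-up projection is only a stand-in, not the estimator analyze.c
actually uses:

import random

def scale_up_estimate(sample, table_rows):
    # Stand-in projection from a sample to the whole table: linear scale-up
    # of the sample's distinct count, clamped to [d, N]. Only here so the
    # two-round structure below has something to adjust.
    d = len(set(sample))
    return max(d, min(table_rows, d * table_rows / len(sample)))

def two_round_estimate(table, small_n=500, factor=5):
    # Round 1: a tiny sample and its projection.
    small = random.sample(table, small_n)
    est_small = scale_up_estimate(small, len(table))
    # Round 2: a sample a few times larger, and its projection.
    large = random.sample(table, small_n * factor)
    est_large = scale_up_estimate(large, len(table))
    # "Adjust the formula accordingly": damp the large-sample projection by
    # how much the projection drifted between rounds. This formula is
    # invented for illustration; the original message leaves it unspecified.
    drift = min(est_small, est_large) / max(est_small, est_large)
    adjusted = est_large * drift ** 0.5
    # Ensure the result stays bounded by [d, N].
    d_seen = len(set(large))
    return max(d_seen, min(len(table), adjusted))

With that reading in mind: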

Perhaps I can save you some time (yes, I have a degree in Math). If I
understand correctly, you're trying to extrapolate from the correlation
between a tiny sample and a larger sample. Introducing the tiny sample
into any decision can only produce a less accurate result than just
taking the larger sample on its own; GIGO. Whether they are consistent
with one another has no relationship to whether the larger sample
correlates with the whole population. You can think of the tiny sample
like "anecdotal" evidence for wonderdrugs.
--
"Dreams come true, not free." -- S.Sondheim, ITW

