Re: estimating # of distinct values

From	tv@fuzzy.cz
Subject	Re: estimating # of distinct values
Date
Msg-id	d7cb4a682509456a0a51a994cfdc138c.squirrel@sq.gransy.com
In reply to	Re: estimating # of distinct values  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
Responses	Re: estimating # of distinct values  (Josh Berkus <josh@agliodbs.com>)
List	pgsql-hackers
> <tv@fuzzy.cz> wrote:
>
>> So even with 10% of the table, there's a 10% probability to get an
>> estimate that's 7x overestimated or underestimated. With lower
>> probability the interval is much wider.
>
> Hmmm...  Currently I generally feel I'm doing OK when the estimated
> rows for a step are in the right order of magnitude -- a 7% error
> would be a big improvement in most cases.  Let's not lose track of
> the fact that these estimates are useful even when they are not
> dead-on accurate.

Well, but that's not 7%, that's 7x! And the theorem says 'greater or equal',
so this is actually the minimum: with lower probability you can get a much
bigger difference. So you can easily get an estimate that is a few orders
of magnitude off.

Anyway, I really don't want precise values, just a reasonable estimate. As
I said, we could use the AE estimator they propose in the paper. It has
the nice property that it actually reaches the lower bound (so the
inequality turns into an equality). The downside is that there are
estimators that behave better on some data sets.
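Just to illustrate the kind of ratio error we're talking about, here is a
quick sketch (data, sample fraction, and seed are made up) that runs a
sampling-based distinct-value estimator on a skewed column. I'm using GEE
(sqrt(N/n)*f1 + sum of f_j for j >= 2, where f_j is the number of values
seen exactly j times in the sample) as a simple representative from the
same family of estimators, not necessarily the AE variant from the paper:

```python
import math
import random

def gee_estimate(sample, table_size):
    # GEE: sqrt(N/n) * f1 + (number of values seen more than once),
    # where f1 = number of values seen exactly once in the sample.
    counts = {}
    for v in sample:
        counts[v] = counts.get(v, 0) + 1
    f1 = sum(1 for c in counts.values() if c == 1)
    rest = sum(1 for c in counts.values() if c > 1)
    return math.sqrt(table_size / len(sample)) * f1 + rest

random.seed(42)
N = 100_000
# Skewed column: a few frequent values mixed with many rare ones.
table = [random.randrange(100) for _ in range(N // 2)] + \
        [random.randrange(50_000) for _ in range(N // 2)]
true_d = len(set(table))

sample = random.sample(table, N // 10)   # a 10% sample
est = gee_estimate(sample, N)
ratio = max(est / true_d, true_d / est)  # ratio error, always >= 1
print(f"true={true_d}, estimate={est:.0f}, ratio error={ratio:.2f}")
```

Running this a few times with different seeds shows how the ratio error
moves around even with a 10% sample, which is the point of the bound above.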

Tomas


