Re: ANALYZE sampling is too good

From Gavin Flower
Subject Re: ANALYZE sampling is too good
Date
Msg-id 52A8D5B0.6080704@archidevsys.co.nz
Whole thread Raw
In reply to Re: ANALYZE sampling is too good  (Gavin Flower <GavinFlower@archidevsys.co.nz>)
Responses Re: ANALYZE sampling is too good
List pgsql-hackers
On 12/12/13 09:12, Gavin Flower wrote:
> On 12/12/13 08:39, Gavin Flower wrote:
>> On 12/12/13 08:31, Kevin Grittner wrote:
>>> Gavin Flower <GavinFlower@archidevsys.co.nz> wrote:
>>>
>>>> For example, assume 1000 rows of 200 bytes and 1000 rows of 20 bytes,
>>>> using 400 byte pages.  In the pathologically worst case, assuming
>>>> maximum packing density and no page has both types: the large rows
>>>> would occupy 500 pages and the smaller rows 50 pages. So if one
>>>> selected 11 pages at random, you get about 10 pages of large rows
>>>> and about one for small rows!
>>> With 10 * 2 = 20 large rows, and 1 * 20 = 20 small rows.
>>>
>>> -- 
>>> Kevin Grittner
>>> EDB: http://www.enterprisedb.com
>>> The Enterprise PostgreSQL Company
>> Sorry, I've simply come up with well argued nonsense!
>>
>> Kevin, you're dead right.
>>
>>
>> Cheers,
>> Gavin
>>
>>
> I looked at:
> http://www.postgresql.org/docs/current/interactive/storage-page-layout.html 
>
> this says that each row has an overhead, which suggests there should 
> be a bias towards small rows.
>
> There must be a lot of things going on, that I'm simply not aware of, 
> that affect sampling bias...
>
>
> Cheers,
> Gavin
>
>
Ignoring overheads per row and other things... There will be a biasing 
effect when the distribution of sizes is not symmetric.  For example: 
when the majority of rows have sizes greater than the arithmetic mean, 
then most samples will be biased towards larger rows. Similarly, there 
could be a bias towards smaller rows when most rows are smaller than the 
arithmetic mean.  Yes, I did think about this in depth - but it is way 
too complicated to attempt to quantify the bias, because it depends on 
too many factors (even just limiting it to the distribution of row sizes).
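
Just to make this concrete, here is a minimal Monte Carlo sketch (plain 
Python, entirely separate from PostgreSQL's actual ANALYZE code) that 
packs the toy rows from the example upthread into pages and then 
measures how far a page-sample estimate of mean row size drifts from 
the true mean. The greedy packing and all the parameters are idealised 
assumptions for illustration only:

import random

PAGE_SIZE = 400          # bytes per page (toy value from the example upthread)
N_PAGES_SAMPLED = 11     # pages drawn per sample, as in the example
N_TRIALS = 10000         # number of simulated samples

# Toy table: 1000 small rows and 1000 large rows, shuffled together.
row_sizes = [20] * 1000 + [200] * 1000
random.shuffle(row_sizes)

# Pack rows into pages greedily (idealised: no per-row overhead).
pages = []
current, used = [], 0
for size in row_sizes:
    if used + size > PAGE_SIZE:
        pages.append(current)
        current, used = [], 0
    current.append(size)
    used += size
if current:
    pages.append(current)

true_mean = sum(row_sizes) / len(row_sizes)

# Repeatedly sample pages uniformly and average the rows they contain.
estimates = []
for _ in range(N_TRIALS):
    sampled = random.sample(pages, N_PAGES_SAMPLED)
    rows = [s for page in sampled for s in page]
    estimates.append(sum(rows) / len(rows))

print("true mean row size:        %.1f" % true_mean)
print("mean of per-sample means:  %.1f" % (sum(estimates) / len(estimates)))

Swapping different distributions into row_sizes makes it easy to 
measure the bias empirically for each case, rather than trying to 
quantify it analytically.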

So, apart from the volatility of the table and the pattern of 
insertions/updates/deletes, there will be a bias depending on the 
distribution of values in the table.

So I despair of a simple, elegant & practical algorithm ever being 
found.

Therefore, I expect the best answer is probably some kind of empirical 
adaptive approach - which I think has already been suggested.
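
For what it's worth, here is a hedged sketch of what one such adaptive 
scheme might look like (my own illustration, not any patch that has 
been posted): sample pages in batches and stop once the running 
estimate of mean row size stabilises. The function name and parameters 
are invented for the example, and it reuses the pages list and the 
random module from the sketch above.

def adaptive_mean_row_size(pages, batch=10, rel_tol=0.01, max_batches=50):
    # Illustration only: sample `batch` pages at a time and stop once the
    # running estimate of mean row size moves by less than rel_tol
    # (relative) between consecutive batches.
    order = list(pages)
    random.shuffle(order)
    sizes = []
    previous = None
    for _ in range(max_batches):
        chunk, order = order[:batch], order[batch:]
        if not chunk:
            break
        sizes.extend(s for page in chunk for s in page)
        estimate = sum(sizes) / len(sizes)
        if previous is not None and abs(estimate - previous) <= rel_tol * previous:
            return estimate
        previous = estimate
    return previous

print("adaptive estimate: %.1f" % adaptive_mean_row_size(pages))

Whether something along these lines could be made robust against the 
volatility and update patterns mentioned above is exactly the open 
question.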


Cheers,
Gavin




