Re: A better way than tweaking NTUP_PER_BUCKET

From: Robert Haas
Subject: Re: A better way than tweaking NTUP_PER_BUCKET
Date:
Msg-id: CA+Tgmoa5Z4Rv6rrm_tPQsUUTSB-5CKoy=JHf_d46gos=S6vZ=A@mail.gmail.com
In reply to: Re: A better way than tweaking NTUP_PER_BUCKET  (Stephen Frost <sfrost@snowman.net>)
List: pgsql-hackers
On Sat, Jun 22, 2013 at 9:48 AM, Stephen Frost <sfrost@snowman.net> wrote:
>> The correct calculation that would match the objective set out in the
>> comment would be
>>
>>  dbuckets = (hash_table_bytes / tupsize) / NTUP_PER_BUCKET;
>
> This looks to be driving the size of the hash table size off of "how
> many of this size tuple can I fit into memory?" and ignoring how many
> actual rows we have to hash.  Consider a work_mem of 1GB with a small
> number of rows to actually hash- say 50.  With a tupsize of 8 bytes,
> we'd be creating a hash table sized for some 13M buckets.

This is a fair point, but I still think Simon's got a good point, too.
Letting the number of buckets ramp up when there's ample memory seems
like a broadly sensible strategy.  We might need to put a floor on the
effective load factor, though.
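
To make the tradeoff concrete, here is a minimal C sketch, not
PostgreSQL's actual ExecChooseHashTableSize() code: choose_nbuckets()
and its arguments are hypothetical names, and NTUP_PER_BUCKET is 10 as
in the source at the time.  It sizes the bucket array from available
memory, as Simon proposes, but caps the count by the estimated row
count, which is one way of putting a floor on the effective load
factor and avoids Stephen's 13M-buckets-for-50-rows scenario:

    /* Hypothetical sketch of the two sizing strategies discussed above. */
    #include <stdio.h>

    #define NTUP_PER_BUCKET 10      /* value in HEAD at the time */

    static long
    choose_nbuckets(long hash_table_bytes, long tupsize, long ntuples)
    {
        /* Memory-driven sizing: as many buckets as work_mem could fill. */
        long mem_buckets = (hash_table_bytes / tupsize) / NTUP_PER_BUCKET;

        /* Row-driven cap: no more buckets than the estimated rows need,
         * i.e. a floor on the effective load factor. */
        long row_buckets = ntuples / NTUP_PER_BUCKET + 1;

        return (row_buckets < mem_buckets) ? row_buckets : mem_buckets;
    }

    int
    main(void)
    {
        /* Stephen's example: 1GB work_mem, 8-byte tuples, only 50 rows. */
        printf("memory-only: %ld buckets\n",
               (1073741824L / 8) / NTUP_PER_BUCKET);   /* ~13.4M buckets */
        printf("with row cap: %ld buckets\n",
               choose_nbuckets(1073741824L, 8, 50));   /* 6 buckets */
        return 0;
    }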

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


