Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize

From: Stephen Frost
Subject: Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize
Date:
Msg-id: 20130706165424.GD3286@tamriel.snowman.net
In response to: Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize (Jeff Janes <jeff.janes@gmail.com>)
List: pgsql-hackers
Jeff,

* Jeff Janes (jeff.janes@gmail.com) wrote:
> I was going to add another item to make nodeHash.c use the new huge
> allocator, but after looking at it just now it was not clear to me that it
> even has such a limitation.  nbatch is limited by MaxAllocSize, but
> nbuckets doesn't seem to be.

nodeHash.c:ExecHashTableCreate() allocates ->buckets using:

palloc(nbuckets * sizeof(HashJoinTuple))

(where HashJoinTuple is actually just a pointer), and allocates it the
same way again in ExecHashTableReset().  That limits the current
implementation to only about 134M buckets, no?
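
Just to spell out the arithmetic, here's a standalone illustration
(using MaxAllocSize's value from src/include/utils/memutils.h, where
Size is just size_t; this isn't actual nodeHash.c code):

    #include <stdio.h>

    /* MaxAllocSize as defined in src/include/utils/memutils.h */
    #define MaxAllocSize ((size_t) 0x3fffffff)  /* 1 gigabyte - 1 */

    int
    main(void)
    {
        /* HashJoinTuple is a pointer, so each bucket slot is pointer-sized */
        size_t  max_buckets = MaxAllocSize / sizeof(void *);

        /* prints 134217727 on a 64-bit build, i.e. ~134M buckets */
        printf("max nbuckets under palloc(): %zu\n", max_buckets);
        return 0;
    }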

Now, what I was really suggesting wasn't so much changing those specific
calls; my point was really that there's a ton of stuff in the HashJoin
code that uses 32-bit integers for things which, these days, might be too
small (nbuckets being one example, imv).  There's a lot of code there
though, and you'd have to really consider which things make sense to have
as int64s.
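
For illustration only, a rough sketch of what the bucket allocation
might become (assuming the new allocator's signature from this thread's
patch, and that the array stays in hashtable->batchCxt as it does
today; not a worked-out patch):

    /*
     * Hypothetical sketch, not a proposed patch: allocate the bucket
     * array with the huge allocator instead of palloc().  Even with
     * this, nbuckets itself is still a plain int, so the bucket count
     * tops out around 2^31 -- really lifting the limit means widening
     * variables throughout the HashJoin code.
     */
    hashtable->buckets = (HashJoinTuple *)
        MemoryContextAllocHuge(hashtable->batchCxt,
                               nbuckets * sizeof(HashJoinTuple));
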
Thanks,
    Stephen
