Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize

From: Jeff Janes
Subject: Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize
Date:
Msg-id: CAMkU=1y8ZBMMapk5i1BgsMHQZsaxDCO=UEKWnu6J=XEjQ-gpAw@mail.gmail.com
In response to: Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize  (Stephen Frost <sfrost@snowman.net>)
Responses: Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize  (Stephen Frost <sfrost@snowman.net>)
List: pgsql-hackers
On Sat, Jun 22, 2013 at 12:46 AM, Stephen Frost <sfrost@snowman.net> wrote:
> Noah,
>
> * Noah Misch (noah@leadboat.com) wrote:
>> This patch introduces MemoryContextAllocHuge() and repalloc_huge() that check
>> a higher MaxAllocHugeSize limit of SIZE_MAX/2.
>
> Nice!  I've complained about this limit a few different times and just
> never got around to addressing it.
>
>> This was made easier by tuplesort growth algorithm improvements in commit
>> 8ae35e91807508872cabd3b0e8db35fc78e194ac.  The problem has come up before
>> (TODO item "Allow sorts to use more available memory"), and Tom floated the
>> idea[1] behind the approach I've used.  The next limit faced by sorts is
>> INT_MAX concurrent tuples in memory, which limits helpful work_mem to about
>> 150 GiB when sorting int4.
>
> That's frustratingly small. :(

I've added a ToDo item to remove that limit from sorts as well.
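
For rough context, a back-of-envelope estimate (my own numbers, assuming roughly 75 bytes of bookkeeping per in-memory tuple for an int4 sort: the 24-byte SortTuple slot plus the palloc'd MinimalTuple and its chunk header):

    INT_MAX tuples * ~75 bytes ~= 2.1e9 * 75 B ~= 160 GB, i.e. about 150 GiB

which lines up with the figure quoted above.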

I was going to add another item to make nodeHash.c use the new huge allocator, but after looking at it just now, it was not clear to me that it even has such a limitation: nbatch is limited by MaxAllocSize, but nbuckets doesn't seem to be.
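
For reference, here is a minimal sketch of the pattern the new allocator enables (illustrative only, not actual nodeHash.c code; grow_pointer_array, ptrs, and nelems are made-up names, but the MemoryContextAllocHuge()/repalloc_huge() signatures are the ones from Noah's patch):

#include "postgres.h"

/*
 * Grow a pointer array past MaxAllocSize (~1 GB).  Plain palloc()/repalloc()
 * raise an ERROR for any request over MaxAllocSize; the huge variants check
 * against MaxAllocHugeSize (SIZE_MAX/2) instead.
 */
static void **
grow_pointer_array(void **ptrs, Size nelems)
{
    Size        newsize = nelems * sizeof(void *);

    if (ptrs == NULL)
        ptrs = (void **) MemoryContextAllocHuge(CurrentMemoryContext, newsize);
    else
        ptrs = (void **) repalloc_huge(ptrs, newsize);

    return ptrs;
}

If nbuckets-style counts ever do turn out to be capped by MaxAllocSize, switching the corresponding arrays over would look much like this.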

Cheers,

Jeff
