Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize

From: Stephen Frost
Subject: Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize
Date: 2013-06-22 13:12:31
Msg-id: 20130622131231.GF7093@tamriel.snowman.net
In reply to: Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize (Simon Riggs <simon@2ndQuadrant.com>)
List: pgsql-hackers
* Simon Riggs (simon@2ndQuadrant.com) wrote:
> On 22 June 2013 08:46, Stephen Frost <sfrost@snowman.net> wrote:
> >> The next limit faced by sorts is
> >> INT_MAX concurrent tuples in memory, which limits helpful work_mem to about
> >> 150 GiB when sorting int4.
> >
> > That's frustratingly small. :(
>
> But that has nothing to do with this patch, right? And is easily fixed, yes?

I don't know about 'easily fixed' (consider supporting a HashJoin of >2B
records), but I do agree that dealing with the places in the code where we
use an int4 to track the number of objects in memory is outside the scope
of this patch.
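
For reference, the quoted 150 GiB figure falls out of simple arithmetic:
INT_MAX tuples times roughly 75 bytes of work_mem per in-memory int4 tuple
(the SortTuple entry in the memtuples array plus the palloc'd tuple
itself). The 75 bytes/tuple is an assumed average picked to match the
quoted estimate, not a measured constant:

    #include <stdio.h>
    #include <limits.h>

    int
    main(void)
    {
        /* Assumed average work_mem cost per in-memory int4 tuple:
         * the SortTuple array entry plus the palloc'd tuple itself.
         * 75 bytes is an assumption, not a measured constant. */
        const double bytes_per_tuple = 75.0;
        double gib = (double) INT_MAX * bytes_per_tuple
            / (1024.0 * 1024.0 * 1024.0);

        printf("INT_MAX tuples * %.0f B/tuple ~= %.0f GiB\n",
               bytes_per_tuple, gib);
        return 0;
    }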

Hopefully we are properly range-checking and limiting ourselves to only
what a given node can support, rather than depending solely on MaxAllocSize
to keep us from overflowing some int4 that we're using as an array index or
as a count of how many objects we currently have in memory. Still, we'll
want to consider carefully what happens with such large sets as we add
support for these Huge allocations into the nodes (along with the recent
change to allow 1TB work_mem, which may encourage users with systems large
enough to actually try setting it that high... :)
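
To illustrate the kind of explicit check I mean, here's a minimal,
hypothetical sketch (the names are made up; this is not code from the
tree) of a node guarding its counter directly instead of leaning on
MaxAllocSize:

    #include <limits.h>
    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Hypothetical helper, not from the PostgreSQL tree: check that an
     * in-memory object count can grow to 'newcount' without overflowing
     * the int used as the array index, independently of whatever
     * MaxAllocSize would have permitted.
     */
    static bool
    count_fits_in_int(int64_t newcount)
    {
        return newcount > 0 && newcount <= (int64_t) INT_MAX;
    }

A node would test count_fits_in_int(memtupcount + 1) before admitting
another tuple, and spill to disk (or raise a clean error) on failure
rather than letting the counter wrap.
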
Thanks,
    Stephen
