Re: Make MemoryContextMemAllocated() more precise

From: Tomas Vondra
Subject: Re: Make MemoryContextMemAllocated() more precise
Msg-id: 20200319181131.vw7kufl22u24tplw@development
In response to: Re: Make MemoryContextMemAllocated() more precise  (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: Make MemoryContextMemAllocated() more precise  (Jeff Davis <pgsql@j-davis.com>)
Re: Make MemoryContextMemAllocated() more precise  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On Thu, Mar 19, 2020 at 11:44:05AM -0400, Robert Haas wrote:
>On Mon, Mar 16, 2020 at 2:45 PM Jeff Davis <pgsql@j-davis.com> wrote:
>> Attached is a patch that makes mem_allocated a method (rather than a
>> field) of MemoryContext, and allows each memory context type to track
>> the memory in its own way. They all do the same thing as before
>> (increment/decrement a field), but AllocSet also subtracts out the free
>> space in the current block. For Slab and Generation, we could do
>> something similar, but it's not as much of a problem because there's no
>> doubling of the allocation size.
>>
>> Although I think this still matches the word "allocation" in spirit,
>> it's not technically correct, so feel free to suggest a new name for
>> MemoryContextMemAllocated().
>
>Procedurally, I think that it is highly inappropriate to submit a
>patch two weeks after the start of the final CommitFest and then
>commit it just over 48 hours later without a single endorsement of the
>change from anyone.
>

True.

>Substantively, I think that whether or not this is an improvement depends
>considerably on how your OS handles overcommit. I do not have enough
>knowledge to know whether it will be better in general, but would
>welcome opinions from others.
>

I'm not sure overcommit is a major factor, and if it is, then maybe it's
the strategy of doubling the block size that's causing problems.

AFAICS the 2x allocation is the worst case, because it only happens
right after allocating a new block (of twice the size), when the
"utilization" drops from 100% to 50%. But in practice the utilization
will be somewhere in between, with an average of 75%. And we're not
doubling the block size indefinitely - there's an upper limit, so over
time each new block causes a smaller and smaller drop in utilization.
So as the contexts grow, the discrepancy disappears. And I'd argue the
smaller the context, the less of an issue the overcommit behavior is.
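
To make this concrete, here's a tiny standalone simulation - not
PostgreSQL code, and I'm assuming the usual AllocSet defaults of an 8kB
initial and 8MB maximum block size - that prints the worst-case
utilization right after each new block is malloc'ed. It approaches 50%
while the block size keeps doubling, and climbs back towards 100% once
the cap kicks in:

#include <stdio.h>

int
main(void)
{
    const long  init_block = 8 * 1024;          /* 8kB initial block */
    const long  max_block = 8 * 1024 * 1024;    /* 8MB cap */
    long        block = init_block;
    long        allocated = 0;      /* total malloc'ed so far */
    long        used = 0;           /* bytes in completely full blocks */

    for (int i = 0; i < 16; i++)
    {
        /* a new, still empty block gets malloc'ed */
        allocated += block;

        printf("block %2d: %5ld kB, allocated %7ld kB, worst-case utilization %3.0f%%\n",
               i + 1, block / 1024, allocated / 1024,
               100.0 * used / allocated);

        /* assume the block fills up completely before the next one */
        used += block;
        if (block * 2 <= max_block)
            block *= 2;
    }

    return 0;
}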

My understanding is that this is really just an accounting issue, where
allocating a new block would push the accounted memory over the limit,
which I suppose might be an issue with low work_mem values.
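
Just for illustration - the helper and its name below are made up, not
something from the patch - the kind of check this accounting feeds might
look like the sketch below. With AllocSet discounting the unused tail of
its current block, the reported value no longer jumps by a whole (mostly
empty) block the moment that block is malloc'ed:

#include "postgres.h"

#include "utils/memutils.h"

/*
 * Hypothetical helper: has the context (including its children) grown
 * past a limit given in kB, e.g. work_mem?  MemoryContextMemAllocated()
 * reports the malloc'ed total, and with the patch the AllocSet
 * implementation subtracts the free space in the current block.
 */
static bool
exceeds_limit(MemoryContext cxt, int limit_kb)
{
    int64       allocated = MemoryContextMemAllocated(cxt, true);

    return allocated > (int64) limit_kb * 1024;
}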

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


