Re: PATCH: decreasing memory needlessly consumed by array_agg

From Tomas Vondra
Subject Re: PATCH: decreasing memory needlessly consumed by array_agg
Date
Msg-id 533AE938.9070905@fuzzy.cz
In reply to Re: PATCH: decreasing memory needlessly consumed by array_agg  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: PATCH: decreasing memory needlessly consumed by array_agg  (Alvaro Herrera <alvherre@2ndquadrant.com>)
Re: PATCH: decreasing memory needlessly consumed by array_agg  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On 31.3.2014 21:04, Robert Haas wrote:
> On Thu, Mar 27, 2014 at 10:00 PM, Tomas Vondra <tv@fuzzy.cz> wrote:
>> The patch also does one more thing - it changes how the arrays (in the
>> aggregate state) grow. Originally it worked like this
>>
>>     /* initial size */
>>     astate->alen = 64;
>>
>>     /* when full, grow exponentially */
>>     if (astate->nelems >= astate->alen)
>>         astate->alen *= 2;
>>
>> so the array length would grow like this 64 -> 128 -> 256 -> 512 ...
>> (note we're talking about elements, not bytes, so with 32-bit
>> integers it's actually 256B -> 512B -> 1024B -> ...).
>>
>> While I do understand the point of this (minimizing palloc overhead), I
>> find this pretty dangerous, especially in case of array_agg(). I've
>> modified the growth like this:
>>
>>     /* initial size */
>>     astate->alen = 4;
>>
>>     /* when full, grow linearly */
>>     if (astate->nelems >= astate->alen)
>>         astate->alen += 4;
>>
>> I admit that might be a bit too aggressive, and maybe there's a better
>> way to do this - with better balance between safety and speed. I was
>> thinking about something like this:
>>
>>
>>     /* initial size */
>>     astate->alen = 4;
>>
>>     /* when full, grow exponentially, then linearly */
>>     if (astate->nelems >= astate->alen)
>>         if (astate->alen < 128)
>>             astate->alen *= 2;
>>         else
>>             astate->alen += 128;
>>
>> i.e. small initial size with exponential growth, but with the
>> increment capped at 128 elements.
> 
> So I think this kind of thing is very sensible, but the last time I
> suggested something similar, I got told "no":
> 
> http://www.postgresql.org/message-id/CAEYLb_WLGHT7yJLaRE9PPeRt5RKd5ZJbb15tE+kpgejgQKORyA@mail.gmail.com
> 
> But I think you're right and the objections previously raised are 
> wrong. I suspect that the point at which we should stop doubling is 
> higher than 128 elements, because that's only 8kB, which really
> isn't that big - and the idea that the resizing overhead takes only 
> amortized constant time is surely appealing. But I still think that 
> doubling *forever* is a bad idea, here and there. The fact that
> we've written the code that way in lots of places doesn't make it the
> right algorithm.

I've been thinking about it a bit more, and maybe the doubling is not
that bad an idea after all. What I'd like to see is a solution that
"wastes" less than some known fraction of the allocated memory, and
apparently that's exactly what doubling gives us ...

Let's assume we have many buffers (arrays in array_agg), allocated in
this manner. Let's assume the buffers are independent, i.e. the doubling
is not somehow "synchronized" for the buffers.

Now, at an arbitrary point in time the buffers should be ~75% full on
average. There will be buffers that were just doubled (50% full),
buffers that will be doubled soon (100% full), and buffers somewhere in
between, but on average the buffers should be 75% full. That means we're
"wasting" 25% of the memory on average, which seems quite acceptable to
me. We could probably use a different growth rate (say 1.5x, resulting
in ~17% of the memory being "wasted"), but I don't see this as the main
problem (and I won't fight for this part of the array_agg patch).

The "current" array_agg however violates some of the assumptions
mentioned above, because it

(1) pre-allocates quite a large number of items (64) at the beginning,
    resulting in ~98% of the memory being "wasted" initially,

(2) allocates one memory context per group, with 8kB initial size, so
    you're actually wasting ~99.999% of the memory, and

(3) thanks to the dedicated memory contexts, the doubling is pretty
    much pointless until you cross the 8kB boundary.

IMNSHO these are the issues we really should fix - by lowering the
initial element count (64->4) and using a single memory context.

regards
Tomas


