Re: Odd out of memory problem.

From: Tom Lane
Subject: Re: Odd out of memory problem.
Date:
Msg-id: 20576.1332807101@sss.pgh.pa.us
In reply to: Re: Odd out of memory problem.  (Greg Stark <stark@mit.edu>)
Responses: Re: Odd out of memory problem.
List: pgsql-hackers
Greg Stark <stark@mit.edu> writes:
> On Mon, Mar 26, 2012 at 6:15 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Could you give us a brain dump on the sketch?  I've never seen how to
>> do it without unreasonable overhead.

> Hm. So my original plan was dependent on adding the state-merge
> function we've talked about in the past. Not all aggregate functions
> necessarily can support such a function but I think all or nearly all
> the builtin aggregates can. Certainly min, max, count, sum, avg,
> stddev, array_agg can, which are most of what people do. That would be
> a function which can take two state variables and produce a new state
> variable.

I'd rather not invent new requirements for aggregate implementations
if we can avoid it.
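
(For reference, a minimal standalone C sketch of the kind of state-merge
function Greg describes, shown for avg -- illustration only, not proposed
backend code, and the struct and function names here are made up:)

    #include <stdio.h>

    /* The avg transition state is (sum, count); merging two partial
     * states is just component-wise addition. */
    typedef struct AvgState
    {
        double  sum;
        long    count;
    } AvgState;

    static AvgState
    avg_merge(AvgState a, AvgState b)
    {
        AvgState    r;

        r.sum = a.sum + b.sum;
        r.count = a.count + b.count;
        return r;
    }

    int
    main(void)
    {
        AvgState    s1 = {10.0, 4};     /* partial state from one chunk */
        AvgState    s2 = {26.0, 6};     /* partial state from another */
        AvgState    m = avg_merge(s1, s2);

        printf("avg = %g\n", m.sum / m.count);  /* prints 3.6 */
        return 0;
    }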

> However now that I've started thinking about it further I think you
> could solve it with less complexity by cheating in various ways. For
> example, if you limit the hash size to 1/2 of work_mem, then when
> you reach that limit you could just stuff any tuple that doesn't match
> a hash entry into a tuplesort with 1/2 of work_mem and do the regular
> level break logic on the output of that.

Or just start dumping such tuples into a tuplestore, while continuing to
process tuples that match the hashagg entries that are already in
existence.  Once the input is exhausted, read out the hashagg entries we
have, flush the hashagg table, start reading from the tuplestore.
Repeat as needed.
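
For illustration, the control flow as a standalone toy C program (not
executor code -- a small fixed-size table stands in for the
work_mem-limited hash table, and plain arrays stand in for the
tuplestore):

    #include <stdio.h>

    #define TABLE_CAP 2     /* toy stand-in for the work_mem limit */

    typedef struct { int key; long sum; } Entry;
    typedef struct { int key; long val; } Tuple;

    static Entry table[TABLE_CAP];
    static int nentries;

    /* Find or create the entry for key; NULL means the table is full
     * and the caller must spill the tuple. */
    static Entry *
    lookup(int key)
    {
        int i;

        for (i = 0; i < nentries; i++)
            if (table[i].key == key)
                return &table[i];
        if (nentries >= TABLE_CAP)
            return NULL;
        table[nentries].key = key;
        table[nentries].sum = 0;
        return &table[nentries++];
    }

    int
    main(void)
    {
        Tuple   input[] = {{1,10},{2,20},{3,30},{1,1},{4,40},{3,3},{2,2},{4,4}};
        Tuple   store[2][8];    /* ping-pong "tuplestores" */
        int     nstore[2] = {0, 0};
        Tuple  *in = input;
        int     nin = 8, cur = 0;

        for (;;)
        {
            int i;

            nentries = 0;
            nstore[cur] = 0;
            for (i = 0; i < nin; i++)
            {
                Entry *e = lookup(in[i].key);

                if (e)
                    e->sum += in[i].val;    /* matches or fits: aggregate */
                else
                    store[cur][nstore[cur]++] = in[i];  /* spill */
            }
            /* input exhausted: read out the entries we have */
            for (i = 0; i < nentries; i++)
                printf("key %d: sum %ld\n", table[i].key, table[i].sum);
            if (nstore[cur] == 0)
                break;              /* nothing was spilled; done */
            in = store[cur];        /* flush the table, reread the spill */
            nin = nstore[cur];
            cur = 1 - cur;
        }
        return 0;
    }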

I like this idea because the only thing you give up is predictability of
the order of output of aggregated entries, which is something that a
hashagg isn't guaranteeing anyway.  In particular, we would still have a
guarantee that any one aggregate evaluation processes the matching
tuples in arrival order, which is critical for some aggregates.

The main problem I can see is that if we start to flush after work_mem
is X% full, we're essentially hoping that the state values for the
existing aggregates won't grow by more than 1-X%, which is safe for many
common aggregates but fails for some like array_agg().  Ultimately, for
ones like that, it'd probably be best to never consider hashing at all.
I guess we could invent an "unsafe for hash aggregation" flag for
aggregates that have unbounded state-size requirements.
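
(The array_agg() problem in miniature -- again just a standalone
illustrative sketch: the transition state itself grows with every input
row, so memory already admitted to the hash table keeps expanding even
after we stop adding new entries:)

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct
    {
        int    *vals;       /* every input seen so far: O(rows), not O(1) */
        int     n;
    } ArrayAggState;

    static void
    array_agg_step(ArrayAggState *s, int v)
    {
        /* no error checking: illustration only */
        s->vals = realloc(s->vals, (s->n + 1) * sizeof(int));
        s->vals[s->n++] = v;
    }

    int
    main(void)
    {
        ArrayAggState   s = {NULL, 0};
        int     i;

        for (i = 0; i < 1000; i++)
            array_agg_step(&s, i);
        printf("state holds %d values (%zu bytes)\n", s.n, s.n * sizeof(int));
        free(s.vals);
        return 0;
    }
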
        regards, tom lane

