Re: Large Scale Aggregation (HashAgg Enhancement)

From: Simon Riggs
Subject: Re: Large Scale Aggregation (HashAgg Enhancement)
Date:
Msg-id: 1137481529.3180.254.camel@localhost.localdomain
In reply to: Re: Large Scale Aggregation (HashAgg Enhancement)  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Large Scale Aggregation (HashAgg Enhancement)  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Mon, 2006-01-16 at 20:02 -0500, Tom Lane wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
> > Sure hash table is dynamic, but we read all inner rows to create the
> > hash table (nodeHash) before we get the outer rows (nodeHJ).
> 
> But our idea of the number of batches needed can change during that
> process, resulting in some inner tuples being initially assigned to the
> wrong temp file.  This would also be true for hashagg.

So we correct that before we start reading the outer table.

> > Why would we continue to dynamically build the hash table after the
> > start of the outer scan?
> 
> The number of tuples written to a temp file might exceed what we want to
> hold in memory; we won't detect this until the batch is read back in,
> and in that case we have to split the batch at that time.  For hashagg
> this point would apply to the aggregate states not the input tuples, but
> it's still a live problem (especially if the aggregate states aren't
> fixed-size values ... consider a "concat" aggregate for instance).

OK, I see what you mean. Sounds like we should have a new definition for
Aggregates, "Sort Insensitive", that allows them to work when the input
ordering does not affect the result, since that case can be optimised
much better when using HashAgg. Since we know that applies to the common
cases of SUM, AVG etc., this will certainly help people.

For sort-sensitive aggregates, it sounds like we either:
1. Write to a single file, while we remember the start offset of the
first row of each batch.
2. Write to multiple files, adding a globally incrementing sequenceid.
Batches are then resorted on the sequenceid before processing.
3. We give up, delete the existing batches and restart the scan from the
beginning of the outer table.

Sounds like (1) is best, since the overflow just becomes a SortedAgg.
But all of them sound ugly.

Best Regards, Simon Riggs



In the pgsql-hackers list, by date:

Previous
From: Tom Lane
Date:
Message: Re: Large Scale Aggregation (HashAgg Enhancement)
Next
From: "R, Rajesh (STSD)"
Date:
Message: Re: [GENERAL] [PATCH] Better way to check for getaddrinfo function.