Re: Spilling hashed SetOps and aggregates to disk
| From | Jeff Davis |
|---|---|
| Subject | Re: Spilling hashed SetOps and aggregates to disk |
| Date | |
| Msg-id | 1528740825.8818.52.camel@j-davis.com |
| In reply to | Re: Spilling hashed SetOps and aggregates to disk (Tomas Vondra <tomas.vondra@2ndquadrant.com>) |
| Responses | Re: Spilling hashed SetOps and aggregates to disk |
| List | pgsql-hackers |
On Mon, 2018-06-11 at 19:33 +0200, Tomas Vondra wrote:
> For example we hit the work_mem limit after processing 10% tuples,
> switching to sort would mean spill+sort of 900GB of data. Or we might
> say - hmm, we're 10% through, so we expect hitting the limit 10x, so
> let's spill the hash table and then do sort on that, writing and
> sorting only 10GB of data. (Or merging it in some hash-based way, per
> Robert's earlier message.)
Your example depends on large groups and a high degree of group
clustering. That's fine, but it's a special case, and complexity does
have a cost, too.
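The arithmetic behind the quoted trade-off can be sketched as follows. This is only an illustration of the numbers in Tomas's example, under assumed figures (a 1TB input and a hash table that fits in roughly 1GB of work_mem); it is not PostgreSQL code, and the "10GB" case only holds when the grouped hash table is much smaller than the raw tuples it summarizes, which is exactly the special case being discussed.

```python
# Back-of-envelope comparison of the two spill strategies from the
# quoted example. All figures are illustrative assumptions, not
# measurements.

TOTAL_INPUT_GB = 1000      # assumed total input size (1TB)
FRACTION_AT_LIMIT = 0.10   # work_mem limit hit after 10% of tuples
HASH_TABLE_GB = 1          # assumed size of one work_mem-full hash table

# Strategy A: abandon hashing, spill and sort the remaining raw tuples.
sort_spill_gb = TOTAL_INPUT_GB * (1 - FRACTION_AT_LIMIT)  # 900GB

# Strategy B: each time the limit is hit, spill the (grouped, hence
# much smaller) hash table and sort/merge the spilled partial groups.
# Hitting the limit at 10% suggests ~10 spills over the whole input.
expected_spills = round(1 / FRACTION_AT_LIMIT)            # 10
hash_spill_gb = expected_spills * HASH_TABLE_GB           # 10GB

print(f"sort-based spill: {sort_spill_gb:.0f}GB")
print(f"hash-table spill: {hash_spill_gb:.0f}GB")
```

The 90x difference is what motivates spilling the hash table; Jeff's point is that it evaporates when groups are small or poorly clustered, since then the spilled partial groups approach the raw input in size.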
Regards,
Jeff Davis