On Fri, 2020-07-24 at 10:40 +0200, Tomas Vondra wrote:
> FWIW one more suspicious thing that I forgot to mention is the
> behavior of the "planned partitions" depending on work_mem, which
> looks like this:
>
>   2MB  Planned Partitions:  64   HashAgg Batches:  4160
>   4MB  Planned Partitions: 128   HashAgg Batches: 16512
>   8MB  Planned Partitions: 256   HashAgg Batches: 21488
>  64MB  Planned Partitions:  32   HashAgg Batches:  2720
> 256MB  Planned Partitions:   8   HashAgg Batches:     8
>
> I'd expect the number of planned partitions to decrease (slowly) as
> work_mem increases, but it seems to increase initially. Seems a bit
> strange, but maybe it's expected.

The space for open-partition buffers is also limited to about 25% of
memory, and each open partition takes one BLCKSZ buffer, so those
numbers are exactly what I'd expect: at work_mem = 2MB, 25% of memory
is 512kB, which is enough for 64 buffers (64 * 8192 = 512kB).
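To make the arithmetic concrete, here is a toy sketch of that memory
limit (not the actual logic in nodeAgg.c, which also factors in the
estimated number of spilled groups and clamps to min/max bounds; the
name planned_partitions is made up for illustration):

```c
#include <assert.h>

#define BLCKSZ 8192

/*
 * Toy model: open-partition buffers get ~25% of work_mem, and each
 * open partition needs one BLCKSZ buffer, so the partition count
 * scales linearly with work_mem. At larger work_mem the estimated
 * number of groups that must spill drives the choice instead, which
 * is why the 64MB and 256MB rows above don't follow this formula.
 */
static int
planned_partitions(long work_mem_bytes)
{
    long mem_for_buffers = work_mem_bytes / 4;  /* ~25% of memory */

    return (int) (mem_for_buffers / BLCKSZ);
}
```

Plugging in the first three rows of the table reproduces the quoted
partition counts: 2MB -> 64, 4MB -> 128, 8MB -> 256.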

There's also another effect at work that can cause the total number of
batches to be higher for larger work_mem values: when we do recurse, we
again need to estimate the number of partitions needed. Right now, we
overestimate the number of partitions needed (to be conservative),
which leads to a wider fan-out and lots of tiny partitions, and
therefore more batches.
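A toy model shows how the fan-out compounds. Assume (purely for
illustration) that every partition at each level spills and recurses
with the same fan-out, and that each partition is re-read as one batch:

```c
#include <assert.h>

/*
 * Toy model (not PostgreSQL code): with a uniform fan-out F and D
 * levels of recursion, the total batch count is the geometric sum
 * F + F^2 + ... + F^D, so an overestimated fan-out inflates the
 * batch count quadratically (or worse) rather than linearly.
 */
static long
total_batches(int fanout, int depth)
{
    long total = 0;
    long level = 1;

    for (int d = 0; d < depth; d++)
    {
        level *= fanout;
        total += level;
    }
    return total;
}
```

Under that (idealized) assumption, the 2MB, 4MB, and 256MB rows above
fall out exactly: 64 + 64^2 = 4160, 128 + 128^2 = 16512, and a single
level of 8. The 8MB row doesn't fit the model, presumably because only
some partitions recursed.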

I think we can improve this by using something like HyperLogLog on the
hash values of the spilled tuples to get a better estimate of the
number of groups (and therefore the number of partitions) we need when
we recurse, which would reduce the overall number of batches at higher
work_mem settings. But I didn't get a chance to implement that yet.
Regards,
Jeff Davis