Re: BUG #16104: Invalid DSA Memory Alloc Request in Parallel Hash
From: Tomas Vondra
Subject: Re: BUG #16104: Invalid DSA Memory Alloc Request in Parallel Hash
Date:
Msg-id: 20191111151625.3kdtri34xce4t5y4@development
In reply to: Re: BUG #16104: Invalid DSA Memory Alloc Request in Parallel Hash (James Coleman <jtc331@gmail.com>)
List: pgsql-bugs
On Mon, Nov 11, 2019 at 09:14:43AM -0500, James Coleman wrote:
>On Sun, Nov 10, 2019 at 4:09 PM Thomas Munro <thomas.munro@gmail.com> wrote:
>>
>> I think I see what's happening: we're running out of hash bits.
>>
>> > Buckets: 4194304 (originally 4194304)  Batches: 32768 (originally 4096)  Memory Usage: 344448kB
>>
>> Here it's using the lower 22 bits for the bucket number, and started
>> out using 12 bits for the batch (!), and increased that until it got
>> to 15 (!!). After using 22 bits for the bucket, there are only 10
>> bits left, so all the tuples go into the lower 1024 batches.
>
>Do we have this kind of problem with hash aggregates also? I've
>noticed the temp disk usage pattern applies to both, and the buffer
>stats show that being the case, but unfortunately the hash aggregate
>node doesn't report memory usage for its hash or buckets info. Given
>it's not a join, maybe we only need buckets and not batches, but I
>don't know this part of the code at all, so I'm just guessing either
>way.
>

I don't think so. The trouble with hash joins is that we need two
independent indexes - bucket and batch - and we only have a single
32-bit hash value to derive both from. The hash aggregate is currently
unable to spill to disk, so it's not affected by this (though it does
have plenty of issues of its own).

There's work in progress aiming to add a memory-bounded hash aggregate,
but I think the spilling is supposed to work very differently there.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
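
As a side note, the bit-splitting Thomas describes upthread can be
illustrated with a short standalone C program. This is only a sketch,
not PostgreSQL source; it mirrors the general shape of
ExecHashGetBucketAndBatch() in src/backend/executor/nodeHash.c (the
real function's details may differ). With 2^22 buckets and 2^15
batches, 22 + 15 = 37 bits are needed but the hash has only 32, so the
top 5 bits of the batch number are always zero:

/*
 * Minimal standalone sketch of the bucket/batch split: the low bits of
 * a single 32-bit hash pick the bucket, the bits above them pick the
 * batch. Not actual PostgreSQL code.
 */
#include <stdio.h>
#include <stdint.h>

static void
get_bucket_and_batch(uint32_t hashvalue, int log2_nbuckets,
                     uint32_t nbuckets, uint32_t nbatch,
                     uint32_t *bucketno, uint32_t *batchno)
{
    /* the low log2_nbuckets bits select the bucket ... */
    *bucketno = hashvalue & (nbuckets - 1);
    /* ... and the bits above them select the batch */
    *batchno = (hashvalue >> log2_nbuckets) & (nbatch - 1);
}

int
main(void)
{
    uint32_t nbuckets = UINT32_C(1) << 22;  /* 4194304, as in the report */
    uint32_t nbatch = UINT32_C(1) << 15;    /* 32768 */
    uint32_t bucketno, batchno;

    /* even a hash value with all 32 bits set ... */
    get_bucket_and_batch(UINT32_C(0xFFFFFFFF), 22, nbuckets, nbatch,
                         &bucketno, &batchno);

    /*
     * ... yields batchno 1023: after the bucket consumes the lower 22
     * bits, only 10 of the 15 batch bits remain, so batches
     * 1024..32767 can never be selected.
     */
    printf("bucketno = %u, batchno = %u\n", bucketno, batchno);
    return 0;
}

Running this prints bucketno = 4194303, batchno = 1023, confirming that
no tuple can ever be routed to a batch above 1023 once 22 bits have
been spent on the bucket number.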