Re: Default setting for enable_hashagg_disk (hash_mem)

From: Peter Geoghegan
Subject: Re: Default setting for enable_hashagg_disk (hash_mem)
Date:
Msg-id: CAH2-WzmJA3kxAcUaSL-Vsi_mmsatvL1=Lbi8Mk4PUYVT5xnwCA@mail.gmail.com
In response to: Re: Default setting for enable_hashagg_disk (hash_mem)  (Jeff Davis <pgsql@j-davis.com>)
Responses: Re: Default setting for enable_hashagg_disk (hash_mem)  (Jeff Davis <pgsql@j-davis.com>)
List: pgsql-hackers
On Sat, Jul 4, 2020 at 1:54 PM Jeff Davis <pgsql@j-davis.com> wrote:
> I agree that it's good to wait for actual problems. But the challenge
> is that we can't backport an added GUC. Are there other, backportable
> changes we could potentially make if a lot of users have a problem with
> v13 after release?

I doubt that there are.

> I'm OK not having a GUC, but we need consensus around what our response
> will be if a user experiences a regression. If our only answer is
> "tweak X, Y, and Z; and if that doesn't work, wait for v14" then I'd
> like almost everyone to be on board with that.

I'm practically certain that there will be users who complain about
regressions. It's all but inevitable, given that grouping estimates
are, in general, often wrong by orders of magnitude.

> Without some backportable potential solutions, I'm inclined to ship
> with either one or two escape-hatch GUCs, with warnings that they
> should be used as a last resort. Hopefully users will complain on the
> lists (so we understand the problem) before setting them.

Where does that leave the hash_mem idea (or some other similar proposal)?

I think that we should offer something like hash_mem that can work as
a multiple of work_mem, for the reason that Justin mentioned recently.
This can be justified as something that more or less maintains some
kind of continuity with the old design.

I think that it should affect hash join too, though I suppose that
that part might be controversial -- that is certainly more than an
escape hatch for this particular problem. Any thoughts on that?

> It's not very principled, and we may be stuck with some cruft, but it
> mitigates the risk a lot. There's a good chance we can remove them
> later, especially if it's part of a larger overhaul of
> work_mem/hash_mem (which might happen fairly soon, given the interest
> in this thread), or if we change something about HashAgg that makes the
> GUCs harder to maintain.

There are several reasons to get rid of work_mem entirely in the
medium to long term. Some are relatively obvious, others less so.

An example in the latter category is "hash teams" [1]: a design that
teaches multiple hash operations (e.g. a hash join and a hash
aggregate that hash on the same columns) to cooperate in processing
their inputs. It's more or less the hashing equivalent of what are
sometimes called "interesting sort orders" (e.g. cases where the same
sort order is used by both a merge join and a group aggregate).
The hash team controls spilling behavior for related hash nodes as a
whole. That's the most sensible way of thinking about the related hash
nodes, to enable a slew of optimizations. For example, I think that it
enables bushy plans with multiple hash joins that can have much lower
high watermark memory consumption.

This hash teams business seems quite important in general, but it is
fundamentally incompatible with the work_mem model, which supposes
that each node exists on its own in a vacuum. (I suspect you already
knew about this, Jeff, but not everyone will.)

[1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.114.3183&rep=rep1&type=pdf
-- 
Peter Geoghegan



In the pgsql-hackers list, by message date:

Previous
From: Tom Lane
Date:
Message: Re: Can I use extern "C" in an extension so I can use C++?
Next
From: Michael Paquier
Date:
Message: Re: Postgres Windows build system doesn't work with python installed in Program Files