Re: Default setting for enable_hashagg_disk

From: David Rowley
Subject: Re: Default setting for enable_hashagg_disk
Date:
Msg-id: CAApHDvqFZikXhAGW=UKZKq1_FzHy+XzmUzAJiNj6RWyTHH4UfA@mail.gmail.com
In reply to: Re: Default setting for enable_hashagg_disk  (Jeff Davis <pgsql@j-davis.com>)
Responses: Re: Default setting for enable_hashagg_disk  (Justin Pryzby <pryzby@telsasoft.com>)
           Re: Default setting for enable_hashagg_disk  (Bruce Momjian <bruce@momjian.us>)
           Re: Default setting for enable_hashagg_disk  (Andres Freund <andres@anarazel.de>)
List: pgsql-hackers
On Tue, 23 Jun 2020 at 08:24, Jeff Davis <pgsql@j-davis.com> wrote:
> Another way of looking at it is that the weird behavior is already
> there in v12, so there are already users relying on this weird behavior
> as a crutch for some other planner mistake. The question is whether we
> want to:
>
> (a) take the weird behavior away now as a consequence of implementing
> disk-based HashAgg; or
> (b) support the weird behavior forever; or
> (c) introduce a GUC now to help transition away from the weird behavior
>
> The danger with (c) is that it gives users more time to become more
> reliant on the weird behavior; and worse, a GUC could be seen as an
> endorsement of the weird behavior rather than a path to eliminating it.
> So we could intend to do (c) and end up with (b). We can mitigate this
> with documentation warnings, perhaps.

So, I have a few thoughts on this subject. I understand both problem
cases have been mentioned before on this thread, but just to
reiterate, the two problem cases that we really would rather people
didn't hit are:

1. Statistics underestimation can cause hashagg to be selected, and
the executor will spill to disk in PG13.  Users may find that
performance suffers: previously the query may have just overshot
work_mem without causing any OOM issues, whereas now their I/O
performance might be terrible.
2. We might now choose to hash aggregate where, pre PG13, we didn't
choose that because the hash table was estimated to be bigger than
work_mem. Hash agg might not be the best plan for the job.

For #1: we know users are forced to run a smaller work_mem than they
might like, as they need to allow for that random moment where all
backends happen to be doing that 5-way hash join at the same time.
It seems reasonable that someone might want the old behaviour. They
may well be sitting on a time bomb that's about to OOM, but it would
be sad if someone's upgrade to PG13 was blocked on this, especially
if it's just due to some query that runs once per month but needs to
perform quickly.

For #2: this seems like a very legitimate requirement to me.  A user
might be unhappy that PG13 now hashaggs where before it sorted and
group aggregated, not because there's some issue with hashagg
spilling, but because that causes the node above the agg to become a
Hash Join rather than a Merge Join, and that's bad for some existing
reason. Our planner doing the wrong thing based on missing,
inaccurate, or out-of-date statistics is not Jeff's fault. Having the
ability to switch off a certain planner feature just follows what we
already do today for many other node types.

As for GUCs to try to help the group of users who, *I'm certain*,
will have problems with PG13's plan choice: I think the overloaded
enable_hashagg option is a really nice compromise.  We don't really
have any other executor node type with multiple GUCs controlling its
behaviour, so I believe it would be nice to keep it that way.

How about:

enable_hashagg = "on" -- enables hashagg allowing it to freely spill
to disk as it pleases.
enable_hashagg = "trynospill" -- Planner will only choose hash_agg if
it thinks it won't spill (pre PG13 planner behaviour)
enable_hashagg = "neverspill" -- executor will *never* spill to disk
and can still OOM (NOT RECOMMENDED, but does give pre PG13 planner and
executor behaviour)
enable_hashagg = "off" -- planner does not consider hash agg, ever.
Same as what PG12 did for this setting.
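
Roughly how I imagine the GUC side could be wired up, reusing the
existing enum-GUC machinery in guc.c (a sketch only; the HashAggMode
name and the enum labels are just my illustration of the four values
above, not code from a patch):

/* Sketch only: possible wiring in guc.c; names are illustrative */
typedef enum
{
    HASHAGG_OFF,            /* "off": planner never considers hash agg */
    HASHAGG_TRY_NO_SPILL,   /* "trynospill": plan hash agg only when
                             * it's expected to fit in work_mem */
    HASHAGG_NEVER_SPILL,    /* "neverspill": executor never spills;
                             * may OOM */
    HASHAGG_ON              /* "on": hash agg may spill freely */
} HashAggMode;

int         enable_hashagg = HASHAGG_ON;    /* was a plain bool */

/* enum GUCs are resolved through a table of config_enum_entry */
static const struct config_enum_entry enable_hashagg_options[] = {
    {"on", HASHAGG_ON, false},
    {"trynospill", HASHAGG_TRY_NO_SPILL, false},
    {"neverspill", HASHAGG_NEVER_SPILL, false},
    {"off", HASHAGG_OFF, false},
    {NULL, 0, false}
};

For backwards compatibility we'd presumably also want the usual
boolean spellings ("true", "false", "1", "0") in that table, mapped
to HASHAGG_ON and HASHAGG_OFF, since existing configuration files and
applications may use them.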

Now, it's a bit weird to have "neverspill", as this is controlling
what's done in the executor from a planner GUC.  Likely we can work
around that by having a new "allowhashspill" bool field in the "Agg"
struct that's set by the planner, say during createplan, which
controls whether nodeAgg.c is allowed to spill or not.  That'll also
allow PREPAREd plans to continue to do what they had planned to do
already.
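
In sketch form (the field name matches what I said above; everything
else here, including the helper names, is hypothetical):

/* plannodes.h (sketch): the planner records its verdict in the plan */
typedef struct Agg
{
    Plan        plan;
    AggStrategy aggstrategy;
    /* ... existing fields unchanged ... */
    bool        allowhashspill;    /* may nodeAgg.c spill to disk? */
} Agg;

/* createplan.c, in create_agg_plan() (sketch): freeze the GUC's value
 * into the plan, so PREPAREd plans keep their planned behaviour */
node->allowhashspill = (enable_hashagg != HASHAGG_NEVER_SPILL);

/* nodeAgg.c (sketch): consult the plan's flag, not the GUC;
 * "over_limit" stands in for whatever the real memory check is */
if (over_limit && ((Agg *) aggstate->ss.ps.plan)->allowhashspill)
    begin_spill(aggstate);      /* hypothetical helper */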

The things I like about doing it this way are:

a) it does not add any new GUCs;
b) it semi-hides the weird values that we really wish nobody would
ever have to set, inside a GUC that people have become used to just
setting to "on" or "off".

The things I don't quite like about this idea are:

a) I wish the planner was perfect and we didn't need to do this.
b) It's a bit weird to overload a GUC that has a very booleanish name
so that it's no longer a bool.

However, I also think it's pretty lightweight to support this. I
imagine a dozen lines of docs and likely about half a dozen lines per
GUC option in the planner.
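
For example, the planner-side check for "trynospill" could presumably
reuse the hash table size estimate the PG12 planner already computes
with estimate_hashagg_tablesize() in planner.c. Something like this
sketch, with the HASHAGG_* values from above:

/* planner.c (sketch): gate hash agg path creation on the GUC */
if (enable_hashagg != HASHAGG_OFF && can_hash)
{
    Size        hashaggtablesize;

    hashaggtablesize = estimate_hashagg_tablesize(cheapest_path,
                                                  agg_costs,
                                                  dNumGroups);

    /*
     * "trynospill" restores the pre-PG13 rule: only consider hash
     * agg when the table is expected to fit in work_mem.
     */
    if (enable_hashagg != HASHAGG_TRY_NO_SPILL ||
        hashaggtablesize < work_mem * 1024L)
    {
        /* add the AGG_HASHED path exactly as the code does today */
    }
}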

And in the future, when our planner is perfect*, we can easily just
remove the enum values from the GUC that we no longer want to support.

David

* Yes I know that will never happen.


