Re: Default setting for enable_hashagg_disk

From: David G. Johnston
Subject: Re: Default setting for enable_hashagg_disk
Date:
Msg-id: CAKFQuwY8dVgkMgQb4BCEqrp8CVZx6p1AjFB773XNqrQnSHAvuw@mail.gmail.com
In reply to: Re: Default setting for enable_hashagg_disk  (Stephen Frost <sfrost@snowman.net>)
Responses: Re: Default setting for enable_hashagg_disk  (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
List: pgsql-hackers
On Sat, Jul 11, 2020 at 7:22 AM Stephen Frost <sfrost@snowman.net> wrote:
There now seems to be some suggestions that not only should we have a
new GUC, but we should default to having it not be equal to work_mem (or
1.0 or whatever) and instead be higher, to be *twice* or larger whatever
the existing work_mem setting is- meaning that people whose systems are
working just fine and have good estimates that represent their workload
and who get the plans they want may then start seeing differences and
increased memory utilization in places that they *don't* want that, all
because we're scared that someone, somewhere, might see a regression due
to HashAgg spilling to disk.

If that increased memory footprint allows the planner to give me a better plan with faster execution and no OOM, I'd be very happy this change happened. While a more flexible memory allocation framework is not a primary goal in and of itself, it is a nice side effect.  I'm not going to say "let's only set work_mem to 32MB instead of 48MB so I can avoid this faster HashAgg node and instead execute a nested loop (or whatever)".  More probable is the user whose current nested loop plan is fast enough and who doesn't even realize that with a bit more memory they could get a HashAgg that performs 15% faster.  For them this is a win on its face.
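To make that concrete, here is a rough sketch of how one might watch the planner flip strategies as the memory budget changes (the table and column names are made up, and the exact thresholds depend entirely on the data):

    SET work_mem = '32MB';
    EXPLAIN SELECT customer_id, count(*) FROM orders GROUP BY customer_id;
    -- with the smaller budget the planner may pick a sort-based GroupAggregate,
    -- or (with the spill patch) a HashAggregate it expects to spill to disk

    SET work_mem = '48MB';
    EXPLAIN SELECT customer_id, count(*) FROM orders GROUP BY customer_id;
    -- with the larger budget it may switch to a fully in-memory HashAggregate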

I don't believe this negatively impacts the super-admins in our user base, and it is a decent win for the average and below-average admin.

Do we really have an issue where plans chosen with access to more memory turn out slower than plans chosen with less memory?

The main risk here is that we choose for a user to consume more memory than they expected and they report OOM issues to us.  We tell them to set this new GUC to 1.0.  But that implies they were getting many non-HashAgg plans that, with a bit more memory, would have become HashAgg plans.  If they get those faster plans without OOM it's a win; if it OOMs it's a loss.  I'm feeling optimistic that we'll get considerably more wins than losses.  How loss-averse do we need to be here, though?  Note that we can give the upgrading user advance notice of our loss-aversion level, and they can simply disagree and set it to 1.0 and/or perform more thorough testing.  So being optimistic feels like the right choice.
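For the loss-averse user the escape hatch is a one-line setting.  Assuming the new knob ends up as a multiplier on work_mem (the name hash_mem_multiplier below is only an assumption for illustration, not something settled in this thread), pinning it back to the old behavior might look like:

    -- assumed GUC name: restore "hash table bounded by plain work_mem" behavior
    SET hash_mem_multiplier = 1.0;

    -- or cluster-wide, surviving restarts
    ALTER SYSTEM SET hash_mem_multiplier = 1.0;
    SELECT pg_reload_conf();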

David J.
