Re: Limiting memory allocation

From: Robert Haas
Subject: Re: Limiting memory allocation
Date:
Msg-id: CA+TgmoaDznFngd7_2_5KeYeNLWgQmBnRh93vsmUTu72pB8NP6g@mail.gmail.com
In reply to: Re: Limiting memory allocation (Tomas Vondra <tomas.vondra@enterprisedb.com>)
Responses: Re: Limiting memory allocation (Bruce Momjian <bruce@momjian.us>)
List: pgsql-hackers
On Fri, May 20, 2022 at 7:09 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
> I wonder if we might eventually use this to define memory budgets. One
> of the common questions I get is how to restrict the user from setting
> work_mem too high or doing too many memory-hungry things. Currently
> there's no way to do that, because we have no way to limit work_mem
> values, and even if we had one, the user could construct a more complex
> query with more memory-hungry operations.
>
> But I think it's also that we weren't sure what to do after hitting a
> limit - should we try replanning the query with a lower work_mem value,
> or what?

It's always seemed to me that the principled thing to do would be to
make work_mem a per-query budget rather than a per-node budget, and
have add_path() treat memory usage as an independent figure of merit
-- and also discard any paths that go over the memory budget. Thus
we might keep more expensive paths if they use less memory to produce
the result. For this to work well, memory-hungry nodes would probably
need to add multiple paths - especially nodes that do hashing, where
the estimated cost is likely to have breakpoints at which it changes
sharply (and the actual cost does too, if the row counts are
accurate).
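
To make that concrete, here's a minimal sketch in C - not
PostgreSQL's actual add_path(), and every name in it is hypothetical
- of what comparing paths on cost and memory as two independent axes
might look like, with an outright discard for anything over a
per-query budget:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct SketchPath
    {
        double cost;        /* estimated total execution cost */
        double mem_bytes;   /* estimated peak memory consumption */
    } SketchPath;

    /* true if 'a' is at least as good as 'b' on both axes, better on one */
    static bool
    dominates(const SketchPath *a, const SketchPath *b)
    {
        return a->cost <= b->cost && a->mem_bytes <= b->mem_bytes &&
               (a->cost < b->cost || a->mem_bytes < b->mem_bytes);
    }

    /*
     * Pairwise version of the add_path() decision: discard anything over
     * the query-wide budget, otherwise keep the new path unless an
     * existing path beats it on both cost and memory.  (Real add_path()
     * compares against a whole pathlist, not a single rival.)
     */
    static bool
    keep_new_path(const SketchPath *new_path, const SketchPath *old_path,
                  double query_mem_budget)
    {
        if (new_path->mem_bytes > query_mem_budget)
            return false;           /* over budget: discard outright */
        if (dominates(old_path, new_path))
            return false;           /* strictly worse on both axes */
        return true;                /* cheaper OR leaner: both survive */
    }

    int
    main(void)
    {
        SketchPath hash_path = { .cost = 100.0, .mem_bytes = 64e6 };
        SketchPath sort_path = { .cost = 140.0, .mem_bytes = 4e6 };

        /* the pricier sort path survives because it is leaner */
        printf("keep sort path: %d\n",
               keep_new_path(&sort_path, &hash_path, 128e6));
        return 0;
    }

The effect is that each relation ends up keeping a small Pareto
frontier of (cost, memory) pairs rather than a single cheapest path -
which is exactly where the extra-paths expense mentioned next comes
from.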

I've also wondered whether we could maybe do something unprincipled
instead, because that all sounds not only complicated but also
potentially expensive, if it results in us keeping extra paths around
compared to what we keep today. It might be worth it, though.
Generating query plans infinitely fast is no good if the plans suck,
and running the machine out of RAM definitely counts as sucking.

My general feeling about this topic is that, in cases where PostgreSQL
today uses more memory than is desirable, it's probably only
moderately difficult to make it fail with a nice error message
instead. Making it succeed by altering its behavior to use less memory
seems likely to be a lot harder -- which is not to say that we
shouldn't try to do it. It's an important problem. Just not an easy
one.
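
As a rough illustration of the "fail with a nice error message"
approach - the budget_alloc() wrapper and the max_total_memory cap
below are inventions for this sketch, not existing PostgreSQL APIs -
the idea is simply to track a running total against a hard limit and
report cleanly rather than letting the kernel's OOM killer decide:

    #include <stdio.h>
    #include <stdlib.h>

    /* hypothetical per-backend cap; in PostgreSQL this would be a GUC */
    static size_t max_total_memory = 512UL * 1024 * 1024;   /* 512 MB */
    static size_t total_allocated = 0;

    static void *
    budget_alloc(size_t size)
    {
        if (total_allocated + size > max_total_memory)
        {
            /* the "nice error message": fail the query, not the machine */
            fprintf(stderr,
                    "ERROR: memory budget exceeded: %zu bytes requested, "
                    "%zu of %zu already in use\n",
                    size, total_allocated, max_total_memory);
            exit(EXIT_FAILURE); /* a real backend would abort the query,
                                 * not the whole process */
        }
        total_allocated += size;
        return malloc(size);
    }

    int
    main(void)
    {
        /* the second request blows the budget and hits the clean error */
        void *a = budget_alloc(400UL * 1024 * 1024);
        void *b = budget_alloc(200UL * 1024 * 1024);
        free(a);
        free(b);
        return 0;
    }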

-- 
Robert Haas
EDB: http://www.enterprisedb.com


