Re: Duplicate Workers entries in some EXPLAIN plans

From: Tom Lane
Subject: Re: Duplicate Workers entries in some EXPLAIN plans
Date:
Msg-id: 18781.1580079621@sss.pgh.pa.us
In reply to: Re: Duplicate Workers entries in some EXPLAIN plans  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
I wrote:
> Andres Freund <andres@anarazel.de> writes:
>> I wonder if we could introduce a debug GUC that makes parallel worker
>> acquisition just retry in a loop, for a time determined by the GUC. That
>> obviously would be a bad idea to do in a production setup, but it could
>> be good enough for regression tests?  There are some deadlock dangers,
>> but I'm not sure they really matter for the tests.

> Hmmm .... might work.  Seems like a better idea than "run it by itself"
> as we have to do now.

The more I think about this, the more it seems like a good idea, and
not only for regression test purposes.  If you're about to launch a
query that will run for hours even with the max number of workers,
you don't want it to launch with less than that number just because
somebody else was eating a worker slot for a few milliseconds.

So I'm imagining a somewhat general-purpose GUC defined like
"max_delay_to_acquire_parallel_worker", measured say in milliseconds.
The default would be zero (current behavior: try once and give up),
but you could set it to small positive values if you have that kind
of production concern, while the regression tests could set it to big
positive values.  This would alleviate all sorts of problems we have
with not being able to assume stable results from parallel worker
acquisition in the tests.
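To make the idea concrete, here is a minimal C sketch of the retry loop such a GUC would govern. All names here (try_acquire_worker, acquire_worker_with_retry, the fake millisecond clock) are illustrative assumptions, not PostgreSQL's actual worker-launch API; the point is only the retry-until-deadline shape, where a setting of zero preserves the current try-once-and-give-up behavior.

```c
#include <assert.h>
#include <stdbool.h>

/* Simulated millisecond clock; a worker slot frees up at t = 30 ms. */
static int fake_clock_ms = 0;
static const int slot_free_at_ms = 30;

static int now_ms(void) { return fake_clock_ms; }
static void sleep_ms(int ms) { fake_clock_ms += ms; }

/* Stand-in for the one-shot acquisition attempt: succeeds only
 * once the simulated slot has become free. */
static bool try_acquire_worker(void)
{
    return now_ms() >= slot_free_at_ms;
}

/* Retry acquisition until it succeeds or max_delay_ms elapses.
 * max_delay_ms == 0 reproduces the existing behavior: one attempt,
 * then give up. */
static bool acquire_worker_with_retry(int max_delay_ms)
{
    int deadline = now_ms() + max_delay_ms;

    for (;;)
    {
        if (try_acquire_worker())
            return true;
        if (now_ms() >= deadline)
            return false;
        sleep_ms(10);       /* back off briefly before retrying */
    }
}
```

With max_delay 0 the first failed attempt is final, matching today's semantics; with a generous value (as the regression tests would use) the loop keeps retrying until the briefly-occupied slot frees up.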

            regards, tom lane



In pgsql-hackers by date:

Previous
From: Tom Lane
Date:
Subject: Re: EXPLAIN's handling of output-a-field-or-not decisions
Next
From: Thomas Munro
Date:
Subject: Re: Parallel leader process info in EXPLAIN