Re: [HACKERS] Block level parallel vacuum

From: Amit Kapila
Subject: Re: [HACKERS] Block level parallel vacuum
Date:
Msg-id: CAA4eK1+PCOLhYLO995vRYj9GE-4i0cRk4VWG_OmNvXZvZE8H0Q@mail.gmail.com
In response to: Re: [HACKERS] Block level parallel vacuum  (Masahiko Sawada <masahiko.sawada@2ndquadrant.com>)
Responses: Re: [HACKERS] Block level parallel vacuum  (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
On Wed, Dec 18, 2019 at 11:46 AM Masahiko Sawada
<masahiko.sawada@2ndquadrant.com> wrote:
>
> On Wed, 18 Dec 2019 at 15:03, Amit Kapila <amit.kapila16@gmail.com> wrote:
> >
> > I was analyzing your changes related to ReinitializeParallelDSM(), and
> > it seems like we might launch more workers than needed for the
> > bulkdelete phase.  While creating the parallel context, we used the
> > maximum of "workers required for the bulkdelete phase" and "workers
> > required for cleanup", but now, if the number of workers required for
> > the bulkdelete phase is less than for the cleanup phase (as you
> > mentioned in one example), we would launch too many workers for the
> > bulkdelete phase.
>
> Good catch. Currently, when creating a parallel context, the number of
> workers passed to CreateParallelContext() is set not only as
> pcxt->nworkers but also as pcxt->nworkers_to_launch. We would need to
> specify the number of workers to actually launch after creating the
> parallel context, or when creating it. Alternatively, we could call
> ReinitializeParallelDSM() even the first time we run index vacuum.
>
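
To make that concrete, the current flow is something like the sketch
below (the entry-point name and worker counts are just for
illustration, and the exact CreateParallelContext() signature varies
across branches):

#include "postgres.h"
#include "access/parallel.h"

/* Example from above: cleanup needs more workers than bulkdelete. */
#define NWORKERS_BULKDEL    2
#define NWORKERS_CLEANUP    4

static void
parallel_vacuum_flow_sketch(void)
{
    ParallelContext *pcxt;
    int     parallel_workers = Max(NWORKERS_BULKDEL, NWORKERS_CLEANUP);

    /*
     * The context is sized for the maximum of the two phases, and
     * CreateParallelContext() also sets pcxt->nworkers_to_launch to
     * that same maximum.
     */
    pcxt = CreateParallelContext("postgres", "parallel_vacuum_main",
                                 parallel_workers);

    /*
     * Problem: unless nworkers_to_launch is adjusted first, this
     * launches all 4 workers for the bulkdelete phase even though
     * that phase needs only 2.
     */
    LaunchParallelWorkers(pcxt);
}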

How about just having a ReinitializeParallelWorkers() which, for now,
can be called only from vacuum, even the first time, before the workers
are launched?
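
Something like this, as a rough sketch (the name comes from the
proposal above; the body and the Assert are my assumption of what it
would do, not settled API):

#include "postgres.h"
#include "access/parallel.h"

/*
 * Adjust only the number of workers to launch for the next phase,
 * without redoing the full ReinitializeParallelDSM() work.
 */
void
ReinitializeParallelWorkers(ParallelContext *pcxt, int nworkers_to_launch)
{
    /* Never request more workers than the context was created with. */
    Assert(pcxt->nworkers >= nworkers_to_launch);

    pcxt->nworkers_to_launch = nworkers_to_launch;
}

Vacuum could then cap the launch count before each phase:

    ReinitializeParallelWorkers(pcxt, nworkers_for_this_phase);
    LaunchParallelWorkers(pcxt);

where nworkers_for_this_phase is whatever the bulkdelete or cleanup
phase actually needs.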


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


