Re: [HACKERS] Block level parallel vacuum

From: Masahiko Sawada
Subject: Re: [HACKERS] Block level parallel vacuum
Date:
Msg-id: CA+fd4k6gnUnmuLDW9zz93d3UWxnjOJ0ru4Gfs_YX1kfLap54=w@mail.gmail.com
In reply to: Re: [HACKERS] Block level parallel vacuum  (Amit Kapila <amit.kapila16@gmail.com>)
Responses: Re: [HACKERS] Block level parallel vacuum  (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
On Wed, 18 Dec 2019 at 15:03, Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Tue, Dec 17, 2019 at 6:07 PM Masahiko Sawada
> <masahiko.sawada@2ndquadrant.com> wrote:
> >
> > On Fri, 13 Dec 2019 at 15:50, Amit Kapila <amit.kapila16@gmail.com> wrote:
> > >
> > > > > I think it shouldn't be more than the number with which we have
> > > > > created a parallel context, no?  If that is the case, then I think it
> > > > > should be fine.
> > > >
> > > > Right. I thought that ReinitializeParallelDSM() with an additional
> > > > argument would shrink the DSM, but I understand that it doesn't
> > > > actually shrink the DSM; it just keeps a variable for the number
> > > > of workers to launch, is that right?
> > > >
> > >
> > > Yeah, probably, we need to change the nworkers stored in the context,
> > > and it should be less than the value already stored there.
> > >
> > > > And we would also need to call ReinitializeParallelDSM() at the
> > > > beginning of index vacuum or index cleanup, since at the end of an
> > > > index vacuum we don't know whether we will next do index vacuum or
> > > > index cleanup.
> > > >
> > >
> > > Right.
> >
> > I've attached the latest version of the patch set. These patches require
> > the gist vacuum patch[1] and incorporate the review comments.
> >
>
> I was analyzing your changes related to ReinitializeParallelDSM() and
> it seems like we might launch more workers than necessary for the
> bulkdelete phase.  While creating a parallel context, we used the
> maximum of "workers required for bulkdelete phase" and "workers
> required for cleanup", but now if the number of workers required in
> the bulkdelete phase is less than in the cleanup phase (as mentioned
> by you in one example), then we would launch too many workers for the
> bulkdelete phase.

Good catch. Currently, when creating a parallel context, the number of
workers passed to CreateParallelContext() is set not only to
pcxt->nworkers but also to pcxt->nworkers_to_launch. We would need to
specify the number of workers to actually launch either when creating
the parallel context or after it has been created. Alternatively, we
could call ReinitializeParallelDSM() even the first time we run index
vacuum.
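To make that concrete, below is a rough sketch of the flow I have in
mind (illustrative only, not the actual patch; the entry-point name and
the direct assignment to pcxt->nworkers_to_launch are placeholders for
whatever interface we settle on):

#include "postgres.h"

#include "access/parallel.h"
#include "access/xact.h"

/* Illustrative sketch of one bulkdelete or cleanup pass. */
static void
parallel_vacuum_pass(int nworkers_bulkdel, int nworkers_cleanup,
                     bool for_cleanup)
{
    ParallelContext *pcxt;

    EnterParallelMode();

    /* Create the context with the maximum workers any phase can use. */
    pcxt = CreateParallelContext("postgres", "parallel_vacuum_main",
                                 Max(nworkers_bulkdel, nworkers_cleanup));
    InitializeParallelDSM(pcxt);
    /* ... shared state would be set up in pcxt->toc here ... */

    /*
     * Reinitialize before every pass, including the first, so we don't
     * need to know in advance whether bulkdelete or cleanup comes first.
     */
    ReinitializeParallelDSM(pcxt);

    /* Cap the launch count; never exceed what the context was created with. */
    pcxt->nworkers_to_launch =
        Min(for_cleanup ? nworkers_cleanup : nworkers_bulkdel,
            pcxt->nworkers);

    LaunchParallelWorkers(pcxt);
    /* ... the leader participates while workers run the entry point ... */
    WaitForParallelWorkersToFinish(pcxt);

    DestroyParallelContext(pcxt);
    ExitParallelMode();
}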

Regards,

-- 
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


