Re: [HACKERS] Block level parallel vacuum

From: Masahiko Sawada
Subject: Re: [HACKERS] Block level parallel vacuum
Date:
Msg-id: CA+fd4k6qemDkg84tp581=WK8_5Gff81YCOCxADEXZ-8QkabuJg@mail.gmail.com
In reply to: Re: [HACKERS] Block level parallel vacuum  (Amit Kapila <amit.kapila16@gmail.com>)
Responses: Re: [HACKERS] Block level parallel vacuum  (Mahendra Singh <mahi6run@gmail.com>)
Re: [HACKERS] Block level parallel vacuum  (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
On Fri, 13 Dec 2019 at 15:50, Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Fri, Dec 13, 2019 at 11:08 AM Masahiko Sawada
> <masahiko.sawada@2ndquadrant.com> wrote:
> >
> > On Fri, 13 Dec 2019 at 14:19, Amit Kapila <amit.kapila16@gmail.com> wrote:
> > >
> > > > > >
> > > > > > How about adding an additional argument to ReinitializeParallelDSM()
> > > > > > that allows the number of workers to be reduced? That seems like it
> > > > > > would be less confusing than what you have now, and would involve
> > > > > > modify code in a lot fewer places.
> > > > > >
> > > > >
> > > > > Yeah, we can do that.  We can maintain some information in
> > > > > LVParallelState which indicates whether we need to reinitialize the
> > > > > DSM before launching workers.  Sawada-San, do you see any problem with
> > > > > this idea?
> > > >
> > > > I think the number of workers could be increased in cleanup phase. For
> > > > example, if we have 1 brin index and 2 gin indexes then in bulkdelete
> > > > phase we need only 1 worker but in cleanup we need 2 workers.
> > > >
> > >
> > > I think it shouldn't be more than the number with which we have
> > > created a parallel context, no?  If that is the case, then I think it
> > > should be fine.
> >
> > Right. I thought that ReinitializeParallelDSM() with an additional
> > argument would reduce the DSM, but I understand that it doesn't
> > actually reduce the DSM; it just has a variable for the number of
> > workers to launch, is that right?
> >
>
> Yeah, probably, we need to change the nworkers stored in the context,
> and it should be less than the value already stored in that number.
>
> > And we also would need to call
> > ReinitializeParallelDSM() at the beginning of vacuum index or vacuum
> > cleanup, since at the end of index vacuum we don't know whether we
> > will do index vacuum or index cleanup next.
> >
>
> Right.

I've attached the latest version of the patch set. These patches require
the gist vacuum patch[1]. The patches incorporate the review comments. In
the current version, only indexes that support parallel vacuum and
whose size is larger than min_parallel_index_scan_size can participate
in parallel vacuum. It's still unclear to me whether using
min_parallel_index_scan_size is the best approach, but I agreed to set
a lower bound on relation size. I separated the patch for
PARALLEL_VACUUM_DISABLE_LEADER_PARTICIPATION from the main patch and
I'm working on that patch.

Please review it.

[1] https://www.postgresql.org/message-id/CAA4eK1J1RxmXFAHC34S4_BznT76cfbrvqORSk23iBgRAOj1azw%40mail.gmail.com

--
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments
