Re: [HACKERS] Block level parallel vacuum

From: Masahiko Sawada
Subject: Re: [HACKERS] Block level parallel vacuum
Date:
Msg-id: CAD21AoCbbqxmDKMU6bVQBuKs-mznJWr8nLyYzmfRUntMkxP4Ng@mail.gmail.com
In response to: Re: [HACKERS] Block level parallel vacuum  (Amit Kapila <amit.kapila16@gmail.com>)
Responses: Re: [HACKERS] Block level parallel vacuum  (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
On Wed, Oct 16, 2019 at 3:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Wed, Oct 16, 2019 at 6:50 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > On Tue, Oct 15, 2019 at 6:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > >
> >
> > Attached is an updated patch set. The 0001 patch introduces a new
> > index AM field, amcanparallelvacuum; all index AMs except gist set it
> > to true for now. The 0002 patch incorporates all the comments I have
> > got so far.
> >
>
> I haven't studied the latest patch in detail, but it seems you are
> still assuming that all indexes will have the same amount of shared
> memory for index stats and are copying it in the same way.

Yeah, I thought we had agreed at least to have amcanparallelvacuum: if
an index AM, like gist, cannot support parallel index vacuuming, it
returns false.

> I thought we
> agreed that each index AM should do this on its own.  The basic
> problem is that, as of now, we see this only with the gist index, but
> some other index AMs could also have a similar problem.

Okay. I'm thinking we're going to have a new callback that asks index
AMs for the size of the structure they use within both ambulkdelete
and amvacuumcleanup. Copying it to DSM can still be done by the core,
because it then knows how many bytes need to be copied. Is that okay?

>
> Another major problem with the previous and this patch version is that
> the cost-based vacuum concept seems to be entirely broken.  Basically,
> each parallel vacuum worker operates independently w.r.t. vacuum delay
> and cost.  Assume that the overall I/O allowed for a vacuum operation
> is X, after which it will sleep for some time, reset the balance and
> continue.  In the patch, each worker is allowed to perform X before it
> sleeps, and there is no coordination with the master backend.  This is
> somewhat similar to the memory usage problem, but a bit more tricky
> because here we can't easily split the I/O across the workers.
>
> One idea could be that we somehow map the vacuum-costing-related
> parameters into the shared memory (DSM) that the vacuum operation is
> using and then allow workers to coordinate.  This way, the master and
> worker processes will have the same view of the cost balance and can
> act accordingly.
>
> The other idea could be that we come up with some smart way to split
> the I/O among workers.  Initially, I thought we could try something
> like what we do for autovacuum workers (see autovac_balance_cost), but
> I think that will require much more math.  Before launching workers,
> we need to compute the remaining I/O (the heap operation would have
> used some) after which we need to sleep, continue the operation, and
> then somehow split it equally across workers.  Once the workers are
> finished, they need to let the master backend know how much I/O they
> have consumed, and then the master backend can add it to its current
> I/O consumed.
>
> I think this problem matters because the vacuum delay is useful for
> large vacuums, which is exactly the case this patch targets, so we
> can't ignore it.  I am not yet sure what the best solution to this
> problem is, but I think we need to do something about it.
>

I guess that the concept of vacuum delay contradicts the concept of
parallel vacuum. The point of parallel vacuum is to use more resources
to make vacuum faster, whereas vacuum delay throttles I/O during
vacuum in order to avoid I/O spikes; parallel vacuum rather
concentrates the I/O into a shorter duration. Since memory is shared
across the entire system we do need to deal with the memory issue, but
disks are different.

If we need to deal with this problem, how about just dividing
vacuum_cost_limit by the parallel degree and setting that as each
worker's vacuum_cost_limit?

Regards,

--
Masahiko Sawada


