Re: maintenance_work_mem used by Vacuum

From: Masahiko Sawada
Subject: Re: maintenance_work_mem used by Vacuum
Date:
Msg-id: CAD21AoDmXvnLE18dZ4BzqwhOyAyr8mqbipbB7sfk52dB5y1aFQ@mail.gmail.com
In reply to: Re: maintenance_work_mem used by Vacuum  (Amit Kapila <amit.kapila16@gmail.com>)
Responses: Re: maintenance_work_mem used by Vacuum
List: pgsql-hackers
On Fri, Oct 11, 2019 at 5:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Fri, Oct 11, 2019 at 7:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > On Thu, Oct 10, 2019 at 6:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > >
> > >
> > > It seems you mean commit id
> > > a1b395b6a26ae80cde17fdfd2def8d351872f399.
> >
> > Yeah, thanks.
> >
> > >  I wonder why they have not
> > > changed it to gin_pending_list_limit (at that time
> > > pending_list_cleanup_size) in that commit itself?  I think if we want
> > > to use gin_pending_list_limit in this context, then we can replace both
> > > work_mem and maintenance_work_mem with gin_pending_list_limit.
> >
> > Hmm, as far as I can see from the discussion, no one mentioned
> > maintenance_work_mem. Perhaps we just overlooked it?
> >
>
> It is possible, but we can't be certain until those people confirm the same.
>
> > I also didn't know
> > that.
> >
> > I also think we can replace at least the work_mem used for cleanup of
> > the pending list with gin_pending_list_limit. Consider the following
> > comment in ginfast.c:
> >
>
> Agreed, but that won't serve the original purpose for which we started
> this thread.
>
> > /*
> >  * Force pending list cleanup when it becomes too long. And,
> >  * ginInsertCleanup could take significant amount of time, so we prefer to
> >  * call it when it can do all the work in a single collection cycle. In
> >  * non-vacuum mode, it shouldn't require maintenance_work_mem, so fire it
> >  * while pending list is still small enough to fit into
> >  * gin_pending_list_limit.
> >  *
> >  * ginInsertCleanup() should not be called inside our CRIT_SECTION.
> >  */
> > cleanupSize = GinGetPendingListCleanupSize(index);
> > if (metadata->nPendingPages * GIN_PAGE_FREESIZE > cleanupSize * 1024L)
> >     needCleanup = true;
> >
> > ISTM the gin_pending_list_limit in the above comment corresponds to
> > the following code in ginfast.c:
> >
> > /*
> >  * We are called from regular insert and if we see concurrent cleanup
> >  * just exit in hope that concurrent process will clean up pending
> >  * list.
> >  */
> > if (!ConditionalLockPage(index, GIN_METAPAGE_BLKNO, ExclusiveLock))
> >     return;
> > workMemory = work_mem;
> >
> > If work_mem is smaller than gin_pending_list_limit, the pending list
> > cleanup would go against the intention of the above comment, which
> > prefers to do all the work in a single collection cycle while the
> > pending list is still small enough to fit into gin_pending_list_limit.
> >
>
> That's right, but OTOH, if the user specifies gin_pending_list_limit
> as an option during CREATE INDEX with a value greater than the GUC
> gin_pending_list_limit, then we will face the same problem as well.  It
> seems to me that we haven't thought enough about memory usage during GIN
> pending list cleanup, and I don't want to independently make a decision
> to change it.  So unless the original author/committer or some other
> people who have worked in this area share their opinion, we can leave
> it as it is and make the parallel vacuum patch adapt to it.

Yeah, I totally agree.
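
For the record, the replacement discussed above would amount to something
like the following in the non-vacuum path of ginInsertCleanup() -- just a
sketch of the idea for future reference, not something I'm proposing to
commit now (GinGetPendingListCleanupSize() and workMemory are both in
kilobytes):

    /*
     * Sketch only: bound the collection memory for a regular-insert
     * cleanup by the index's pending-list limit rather than by work_mem,
     * so a pending list that still fits within gin_pending_list_limit
     * can be absorbed in a single collection cycle.
     */
    if (!ConditionalLockPage(index, GIN_METAPAGE_BLKNO, ExclusiveLock))
        return;
    workMemory = GinGetPendingListCleanupSize(index);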

Apart from the GIN problem, can we discuss whether we need to change the
current memory usage policy for parallel utility commands described in
the docs? After all, we cannot control the memory usage within an index
AM, but we need to consider generically how much memory is used during
parallel vacuum.
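
For example, if the leader and each of N parallel workers were allowed to
allocate up to maintenance_work_mem, the whole command could use roughly
(N + 1) * maintenance_work_mem in the worst case, which seems to go
against the documented expectation that a parallel utility command does
not consume substantially more memory than its serial equivalent. A
purely hypothetical way to keep a single budget would be to divide it,
e.g. (the function name is made up; maintenance_work_mem is the int GUC
in kilobytes and Max() is the macro from c.h):

    /*
     * Hypothetical sketch: split one maintenance_work_mem budget among
     * the leader and nworkers parallel workers.
     */
    static int
    parallel_vacuum_mem_per_worker(int nworkers)
    {
        return maintenance_work_mem / Max(nworkers + 1, 1);
    }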

Regards,

--
Masahiko Sawada


