Re: maintenance_work_mem used by Vacuum

From: Amit Kapila
Subject: Re: maintenance_work_mem used by Vacuum
Date:
Msg-id: CAA4eK1Jtgv6j6oT2f3aYyztMUhM94gF5utmCEyKP7vVB6UjcwQ@mail.gmail.com
In reply to: Re: maintenance_work_mem used by Vacuum  (Masahiko Sawada <sawada.mshk@gmail.com>)
Responses: Re: maintenance_work_mem used by Vacuum  (Masahiko Sawada <sawada.mshk@gmail.com>)
List: pgsql-hackers
On Fri, Oct 11, 2019 at 7:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>
> On Thu, Oct 10, 2019 at 6:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> >
> >
> > It seems you are referring to commit id
> > a1b395b6a26ae80cde17fdfd2def8d351872f399.
>
> Yeah thanks.
>
> >  I wonder why they have not
> > changed it to gin_pending_list_limit (at that time
> > pending_list_cleanup_size) in that commit itself?  I think if we want
> > to use gin_pending_list_limit in this context then we can replace both
> > work_mem and maintenance_work_mem with gin_pending_list_limit.
>
> Hmm, as far as I can see from the discussion, no one mentioned
> maintenance_work_mem. Perhaps it was just an oversight?
>

It is possible, but we can't be certain until those people confirm it.

> I also didn't know
> that.
>
> I also think we can replace at least the work_mem used for cleanup of
> the pending list with gin_pending_list_limit. Consider the following
> comment in ginfast.c:
>

Agreed, but that won't address the original problem for which we started
this thread.

> /*
>  * Force pending list cleanup when it becomes too long. And,
>  * ginInsertCleanup could take significant amount of time, so we prefer to
>  * call it when it can do all the work in a single collection cycle. In
>  * non-vacuum mode, it shouldn't require maintenance_work_mem, so fire it
>  * while pending list is still small enough to fit into
>  * gin_pending_list_limit.
>  *
>  * ginInsertCleanup() should not be called inside our CRIT_SECTION.
>  */
> cleanupSize = GinGetPendingListCleanupSize(index);
> if (metadata->nPendingPages * GIN_PAGE_FREESIZE > cleanupSize * 1024L)
>     needCleanup = true;
>
> ISTM the gin_pending_list_limit in the above comment corresponds to
> the following code in ginfast.c,
>
> /*
>  * We are called from regular insert and if we see concurrent cleanup
>  * just exit in hope that concurrent process will clean up pending
>  * list.
>  */
> if (!ConditionalLockPage(index, GIN_METAPAGE_BLKNO, ExclusiveLock))
>     return;
> workMemory = work_mem;
>
> If work_mem is smaller than gin_pending_list_limit, the pending list
> cleanup would behave against the intention of the above comment, which
> prefers to do all the work in a single collection cycle while the
> pending list is still small enough to fit into gin_pending_list_limit.
>

That's right, but OTOH, if the user specifies gin_pending_list_limit
as an option during Create Index with a value greater than the GUC
gin_pending_list_limit, then we will face the same problem as well.  It
seems to me that we haven't thought enough about memory usage during GIN
pending list cleanup, and I don't want to make a decision to change it
independently.  So unless the original author/committer or some other
people who have worked in this area share their opinion, we can leave
it as it is and make the parallel vacuum patch adapt to it.
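
For context, here is a rough sketch of how ginInsertCleanup() currently
picks its memory budget, paraphrased from ginfast.c as I read it (not
verbatim, and details may differ across versions); the else branch is
the snippet quoted above:

    if (forceCleanup)
    {
        /*
         * Called from [auto]vacuum/analyze or gin_clean_pending_list():
         * wait for any concurrent cleanup and use the maintenance budget.
         */
        LockPage(index, GIN_METAPAGE_BLKNO, ExclusiveLock);
        workMemory =
            (IsAutoVacuumWorkerProcess() && autovacuum_work_mem != -1) ?
            autovacuum_work_mem : maintenance_work_mem;
    }
    else
    {
        /* Called from a regular insert. */
        if (!ConditionalLockPage(index, GIN_METAPAGE_BLKNO, ExclusiveLock))
            return;
        workMemory = work_mem;
    }

If we went the gin_pending_list_limit route, presumably both branches
would take workMemory from GinGetPendingListCleanupSize(index) (the
reloption if set, else the GUC), which is why the reloption-vs-GUC
mismatch matters just as much as the work_mem one.  As a purely
illustrative example (numbers not from any report): with
gin_pending_list_limit = 16MB and work_mem = 4MB, the insert path does
not trigger a cleanup until the pending list reaches ~16MB, but then has
only ~4MB per collection cycle, so the single-cycle intent in the
comment above can't hold.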

Suggestions from others are welcome.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


