Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From Amit Kapila
Subject Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date
Msg-id CAA4eK1KiLzSn8P=rdemZNUs8pkCf9q3VUtWiS9jOjfX2tv=0Mw@mail.gmail.com
In response to Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Dilip Kumar <dilipbalaut@gmail.com>)
Responses Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Dilip Kumar <dilipbalaut@gmail.com>)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Dilip Kumar <dilipbalaut@gmail.com>)
List pgsql-hackers
On Sun, Dec 29, 2019 at 1:34 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
>
> I have observed some more issues
>
> 1. Currently, in ReorderBufferCommit, it is always expected that
> whenever we get REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM, we must
> have already got REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT, and on
> SPEC_CONFIRM we send the tuple we got in SPEC_INSERT.  But now those
> two messages can be in different streams, so we need to find a way to
> handle this.  Maybe once we get SPEC_INSERT we can remember the
> tuple, and then if we get the SPEC_CONFIRM in the next stream we can
> send that tuple?
>

Your suggestion makes sense to me.  So, we can try it.
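
To make that concrete, below is a rough sketch of how the per-change
loop could handle this.  It is only an illustration, not patch code:
the txn-level field used to remember the speculative insert
(txn->specinsert here) and the elided relation lookup are assumptions.

            case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT:
                /*
                 * Don't send the tuple yet; remember it on the txn so
                 * that a later SPEC_CONFIRM (possibly arriving in the
                 * next stream) can still send it.
                 */
                txn->specinsert = change;
                break;

            case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
                /*
                 * The corresponding SPEC_INSERT may have been seen in a
                 * previous stream, so fetch it from the txn instead of
                 * a local variable.  (Relation lookup elided.)
                 */
                Assert(txn->specinsert != NULL);
                rb->apply_change(rb, txn, relation, txn->specinsert);
                txn->specinsert = NULL;
                break;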

> 2. At commit time, in DecodeCommit, we check whether we need to skip
> the changes of the transaction by calling SnapBuildXactNeedsSkip.
> But since we now support streaming, it's possible that before we
> decode the commit WAL we have already sent the changes to the output
> plugin, even though we could have skipped those changes.  So my
> question is: instead of checking at commit time, can't we check
> before adding the changes to the ReorderBuffer itself?
>

I think if we could do that, the same would be true for the current
code, irrespective of this patch.  It is possible that we can't take
that decision while decoding because we haven't assembled a consistent
snapshot yet.  We might be able to do it when we try to stream the
changes.  I think we need to take care of all the conditions during
streaming (when the logical_decoding_work_mem limit is reached) that
we handle in DecodeCommit.  This needs a bit more study.
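
As a rough illustration of that direction (not a concrete proposal),
the streaming path could apply a check along these lines before
sending anything to the output plugin.  SnapBuildXactNeedsSkip() and
the LogicalDecodingContext fields are existing ones, but the helper
itself and the exact set of DecodeCommit conditions that can be
evaluated before commit are assumptions that need the study mentioned
above.

    /*
     * Illustrative sketch only: decide whether a transaction that hit
     * the memory limit should be streamed at all.
     */
    static bool
    SkipStreamingTxn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)
    {
        /* Changes from before the consistent point must not be sent. */
        if (SnapBuildXactNeedsSkip(ctx->snapshot_builder, txn->first_lsn))
            return true;

        /* Fast-forward decoding never sends changes to the plugin. */
        if (ctx->fast_forward)
            return true;

        /*
         * DecodeCommit also filters by database and replication origin;
         * whether those can be checked reliably before commit is the
         * part that needs more study.
         */
        return false;
    }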

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


