Re: Skip collecting decoded changes of already-aborted transactions

From Dilip Kumar
Subject Re: Skip collecting decoded changes of already-aborted transactions
Date
Msg-id CAFiTN-v+fpnqTdD_bmNHf7XRiQVKB+L6XN5ydOMGMZ9y7Q117w@mail.gmail.com
In reply to Skip collecting decoded changes of already-aborted transactions  (Masahiko Sawada <sawada.mshk@gmail.com>)
Responses Re: Skip collecting decoded changes of already-aborted transactions
List pgsql-hackers
On Tue, Nov 26, 2024 at 3:02 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

>
> I've attached a new version patch that incorporates all comments I got so far.
>
> I think the patch is in good shape but I'm considering whether we
> might want to call ReorderBufferToastReset() after truncating all
> changes, in ReorderBufferTruncateTXNIfAborted() just in case. Will
> investigate further.
>

There's something that seems a bit odd to me. Consider the case where
the largest transaction is aborted. If
ReorderBufferCanStartStreaming() returns true, that transaction's
changes are discarded only if it is a streamable transaction. However,
if ReorderBufferCanStartStreaming() returns false, the changes are
discarded regardless.
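
For reference, this is roughly how I read the relevant part of the
eviction path in ReorderBufferCheckMemoryLimit() with the patch
applied (paraphrased from memory rather than quoted from the diff, so
the details may well differ):

    /*
     * Paraphrase, not the actual diff: which transaction we look at (and
     * therefore whether an aborted one can have its changes truncated)
     * depends on the streaming check.
     */
    if (ReorderBufferCanStartStreaming(rb))
    {
        /*
         * Only the largest streamable top-level txn is considered here,
         * so an aborted txn is truncated only if it happens to be
         * streamable.
         */
        txn = ReorderBufferLargestStreamableTopTXN(rb);
    }
    else
    {
        /*
         * The largest txn overall is considered, so an aborted txn gets
         * its changes truncated regardless.
         */
        txn = ReorderBufferLargestTXN(rb);
    }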

What seems strange to me is that the patch makes truncating the
changes of a large aborted transaction depend on whether we need to
stream or spill, when IMHO the two should be completely independent.
My concern is that if the largest transaction is aborted but isn't yet
streamable, we might end up picking the next transaction, which could
be much smaller. That smaller transaction might not bring us back
under the memory limit, and we could repeat this process for several
more transactions. In contrast, it would be more efficient to simply
discard the large aborted transaction, even if it's not streamable,
and avoid this issue.
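
To make that concrete, something along these lines is what I have in
mind. This is only a rough sketch, not a proposed patch: I'm assuming
a signature like ReorderBufferTruncateTXNIfAborted(rb, txn) that
returns true when it discarded the changes, which may not match what
the patch actually does, and NULL checks and asserts are omitted for
brevity.

    while (rb->size >= logical_decoding_work_mem * 1024L)
    {
        ReorderBufferTXN *txn;

        /*
         * Check the largest transaction overall first.  If it is already
         * known to be aborted, discard its changes right away,
         * independently of whether it is streamable or whether we can
         * stream at all.  (Assumes the function returns true when it
         * truncated the changes.)
         */
        txn = ReorderBufferLargestTXN(rb);
        if (ReorderBufferTruncateTXNIfAborted(rb, txn))
            continue;

        /* Otherwise evict as today: stream if possible, else spill. */
        if (ReorderBufferCanStartStreaming(rb))
            ReorderBufferStreamTXN(rb, ReorderBufferLargestStreamableTopTXN(rb));
        else
            ReorderBufferSerializeTXN(rb, txn);
    }

That way the decision to drop an aborted transaction's changes is made
before, and independently of, the stream-or-spill decision.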

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
