Re: [HACKERS] logical decoding of two-phase transactions

From: Andres Freund
Subject: Re: [HACKERS] logical decoding of two-phase transactions
Date:
Msg-id: 20180723182342.okofpy6kyi7oqaql@alap3.anarazel.de
In reply to: Re: [HACKERS] logical decoding of two-phase transactions  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
Hi,

On 2018-07-23 12:38:25 -0400, Robert Haas wrote:
> On Mon, Jul 23, 2018 at 12:13 PM, Andres Freund <andres@anarazel.de> wrote:
> > My point is that we could just make HTSV treat them as recently dead,
> > without incurring the issues of the bug you referenced.
> 
> That doesn't seem sufficient.  For example, it won't keep the
> predecessor tuple's ctid field from being overwritten by a subsequent
> updater -- and if that happens then the update chain is broken.

Sure. I wasn't arguing that it'd be sufficient. Just that the specific
claim that it'd bring back the bug you mentioned isn't right.  I agree
that it's quite terrifying to attempt to get this right.


> Maybe your idea of cross-checking at the end of each syscache lookup
> would be sufficient to prevent that from happening, though.

Hm? If we go for that approach we would not do *anything* about pruning,
which is why I think it has appeal. Because we'd check at the end of
system table scans (not syscache lookups, positive cache hits are fine
because of invalidation handling) whether the to-be-decoded transaction
aborted, we'd not need to do anything about pruning: If the transaction
aborted, we're guaranteed to know - the result might have been wrong,
but since we error out before filling any caches, we're ok.  If it
hasn't yet aborted at the end of the scan, we conversely are guaranteed
that the scan results are correct.

Greetings,

Andres Freund

