Re: logical decoding and replication of sequences, take 2

From: Tomas Vondra
Subject: Re: logical decoding and replication of sequences, take 2
Date:
Msg-id: 547f6488-2365-770d-6801-a676ba433e5f@enterprisedb.com
In reply to: Re: logical decoding and replication of sequences, take 2 (Tomas Vondra <tomas.vondra@enterprisedb.com>)
List: pgsql-hackers
Hi!

Considering my findings about issues with the rd_newRelfilelocatorSubid
field and how it makes that approach impossible, I decided to rip out
those patches, and go back to the approach where reorderbuffer tracks
new relfilenodes. This means the open questions I listed two days ago
disappear, because all of that was about the alternative approach.

I've also added a couple more tests into 034_sequences.pl, testing the
basic cases with subtransactions that roll back (or not), etc. The
attached patch also addresses the review comments by Peter Smith.

The one remaining open question is ReorderBufferSequenceIsTransactional
and whether it can do better than searching through all top-level
transactions. The idea of 0002 was to only search the current top-level
xact, but Amit pointed out we can't rely on seeing the assignment until
we know we're in a consistent snapshot.

I have yet to run some tests measuring how expensive this lookup can be
in practice. But let's assume it's measurable and significant enough to
matter. I wonder if we could salvage this optimization somehow. I'm
thinking about three options:

1) Could ReorderBufferSequenceIsTransactional check the snapshot is
already consistent etc. and use the optimized variant (looking only at
the same top-level xact) in that case? And if not, fallback to the
search of all top-level xacts. In practice, the full search would be
used only for a short initial period.
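To make (1) concrete, here's a minimal sketch of the intended control flow. The structures and names (Xact, ReorderBuffer, SequenceIsTransactional, the fixed-size created[] array) are simplified stand-ins I made up for illustration, not the actual reorderbuffer code:

```c
#include <stdbool.h>

typedef unsigned int RelFileNo;

/* Simplified stand-in: each top-level xact remembers the
 * relfilenodes it created. */
typedef struct Xact
{
    RelFileNo   created[8];
    int         ncreated;
} Xact;

typedef struct ReorderBuffer
{
    Xact       *toplevel;           /* all known top-level xacts */
    int         ntoplevel;
    bool        snapshot_consistent;
} ReorderBuffer;

static bool
xact_created(const Xact *xact, RelFileNo rfn)
{
    for (int i = 0; i < xact->ncreated; i++)
        if (xact->created[i] == rfn)
            return true;
    return false;
}

/*
 * Option (1): once the snapshot is consistent we can rely on
 * assignments, so checking only the current top-level xact is enough;
 * before that, fall back to scanning every top-level transaction.
 */
static bool
SequenceIsTransactional(ReorderBuffer *rb, Xact *current, RelFileNo rfn)
{
    if (rb->snapshot_consistent)
        return xact_created(current, rfn);      /* optimized path */

    for (int i = 0; i < rb->ntoplevel; i++)     /* fallback: full search */
        if (xact_created(&rb->toplevel[i], rfn))
            return true;

    return false;       /* not created by any decoded xact */
}
```

The point is that the expensive full search only runs while we're still waiting for a consistent snapshot, i.e. a short initial period.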

2) We could also make ReorderBufferSequenceIsTransactional always check
the same top-level transaction first and then fall back, no matter
whether the snapshot is consistent or not. The problem is this doesn't
really optimize the common case where there are no new relfilenodes, so
we won't find a match in the top-level xact, and will always search
everything anyway.

3) Alternatively, we could maintain a global hash table of new
relfilenodes, instead of tracking them only in the top-level
transaction. So there'd always be two copies, one in the xact itself and
one in the global hash. Currently there's either one (in the current
top-level xact) or two (subxact + top-level xact).
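For (3), the shape I have in mind is roughly the following. Again, this is an illustrative sketch under invented names, with a trivial linear table standing in for a proper hash table; a real patch would use dynahash and keep the per-transaction copy for cleanup:

```c
#include <stdbool.h>

typedef unsigned int RelFileNo;
typedef unsigned int Xid;

#define MAX_ENTRIES 64

typedef struct RelFileEntry
{
    RelFileNo   rfn;
    Xid         toplevel_xid;   /* owning top-level transaction */
} RelFileEntry;

/* Global table of new relfilenodes, keyed by relfilenode. */
static RelFileEntry global_tab[MAX_ENTRIES];
static int          global_ntab;

/* Record a relfilenode created by the given top-level xact. */
static void
record_new_relfilenode(RelFileNo rfn, Xid toplevel_xid)
{
    global_tab[global_ntab].rfn = rfn;
    global_tab[global_ntab].toplevel_xid = toplevel_xid;
    global_ntab++;
}

/*
 * The check no longer depends on snapshot consistency or on knowing
 * which top-level xact we're in: one lookup answers it.
 */
static bool
SequenceIsTransactional(RelFileNo rfn)
{
    for (int i = 0; i < global_ntab; i++)
        if (global_tab[i].rfn == rfn)
            return true;
    return false;
}

/* On commit/abort of a top-level xact, drop its entries. */
static void
forget_xact(Xid toplevel_xid)
{
    int j = 0;

    for (int i = 0; i < global_ntab; i++)
        if (global_tab[i].toplevel_xid != toplevel_xid)
            global_tab[j++] = global_tab[i];
    global_ntab = j;
}
```

The cost is keeping the global copy in sync with xact commit/abort, but the lookup itself becomes a single probe regardless of how many transactions are in flight.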

I kinda like (3), because it just works and doesn't require the snapshot
to be consistent, etc.


Opinions?

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company