Re: What should I expect when creating many logical replication slots?

From: Antonin Bas
Subject: Re: What should I expect when creating many logical replication slots?
Date:
Msg-id: CAAkB0aAThnvuLgzeCuyE_S-6kQa_n3UCcf0xR-3_ufC3h7uHqg@mail.gmail.com
In reply to: Re: What should I expect when creating many logical replication slots?  (Jim Nasby <jim.nasby@gmail.com>)
List: pgsql-general
Hi Jim. Thanks for taking the time to reply. Please see below.

On Tue, Jan 16, 2024 at 10:51, Jim Nasby <jim.nasby@gmail.com> wrote:
On 1/11/24 6:17 PM, Antonin Bas wrote:
> Hi all,
>
> I have a use case for which I am considering using Postgres Logical
> Replication, but I would like to scale up to 100 or even 200
> replication slots.
>
> I have increased max_wal_senders and max_replication_slots to 100 (also
> making sure that max_connections is large enough). Things seem to be
> working pretty well so far based on some PoC code I have written.
> Postgres is creating a walsender process for each replication slot, as
> expected, and the memory footprint of each one is around 4MB.
>
> So I am quite happy with the way things are working, but I am a bit
> uneasy about increasing these configuration values by 10-20x compared to
> their defaults (both max_wal_senders and max_replication_slots default
> to 10).
>
> Is there anything I should be looking out for specifically? Is it
> considered an anti-pattern to use that many replication slots and
> walsender processes? And, when my database comes under heavy write load,
> will walsender processes start consuming a large amount of CPU / memory
> (I recognize that this is a vague question, I am still working on some
> empirical testing).

The biggest issue with logical decoding (which drives logical
replication) is that every subscriber has to completely decode
everything for its publication, which can be extremely memory intensive
under certain circumstances (long-running transactions being one
potential trigger). Decoders also have to read through all WAL traffic,
regardless of what their publication is set to - everything runs off the
single WAL stream.

That seems to be the biggest issue for me.
I wanted to create a publication for a single table with a low change rate. But based on what you are describing, it sounds like if the transaction rate is very high for other tables (not part of the publication), it will still affect the resource consumption of the walsender processes, which have to decode the unrelated WAL traffic. Am I understanding correctly?
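(As an aside for readers of the archive: on PostgreSQL 14 and later, the amount of decoding work each slot is actually doing, including transactions spilled to disk when they exceed logical_decoding_work_mem, can be observed with the pg_stat_replication_slots view. A sketch of such a check:)

```sql
-- Per-slot decoding statistics (PostgreSQL 14+).
-- High spill_txns / spill_bytes suggest large or long-running
-- transactions being spilled to disk during decoding.
SELECT slot_name,
       spill_txns,   -- transactions spilled to disk
       spill_bytes,  -- bytes of decoded changes spilled
       total_txns,   -- all transactions decoded for this slot
       total_bytes
FROM pg_stat_replication_slots;
```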

Note that this only applies to actually decoding - simply having a large
number of slots isn't much of an issue. Even having a large number of
subscribers that aren't consuming isn't a resource issue (though it IS
an issue for MVCC / vacuuming!) - to test you need to have all the
decoders that you expect to support.

Ultimately, I'd be concerned with trying to support 100+ slots unless
you know that your change rate isn't super high and that you don't have
long-running transactions.
--
Jim Nasby, Data Architect, Austin TX
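(Archive note: the MVCC / vacuuming concern above about non-consuming subscribers can be monitored from the server side. A sketch, using the standard pg_replication_slots view, of how much WAL and which transaction horizons each slot is holding back:)

```sql
-- WAL retained by each slot; an inactive slot with a large value here
-- prevents WAL removal, and an old xmin / catalog_xmin can hold back
-- vacuum's cleanup horizon.
SELECT slot_name,
       active,
       pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal,
       xmin,
       catalog_xmin
FROM pg_replication_slots;
```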
