Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Tomas Vondra
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date:
Msg-id: e601e90e-f8d7-4e07-fbf0-90ea264085c2@2ndquadrant.com
In reply to: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Craig Ringer <craig@2ndquadrant.com>)
List: pgsql-hackers

On 12/24/2017 05:51 AM, Craig Ringer wrote:
> On 23 December 2017 at 12:57, Tomas Vondra <tomas.vondra@2ndquadrant.com
> <mailto:tomas.vondra@2ndquadrant.com>> wrote:
> 
>     Hi all,
> 
>     Attached is a patch series that adds two features to logical
>     replication: the ability to define a memory limit for the reorderbuffer
>     (responsible for building the decoded transactions), and the ability to
>     stream large in-progress transactions (exceeding the memory limit).
> 
>     I'm submitting those two changes together, because one builds on the
>     other, and it's beneficial to discuss them together.
> 
> 
>     PART 1: adding logical_work_mem memory limit (0001)
>     ---------------------------------------------------
> 
>     Currently, limiting the amount of memory consumed by logical decoding is
>     tricky (or you might say impossible) for several reasons:
> 
>     * The value is hard-coded, so it's not quite possible to customize it.
> 
>     * The amount of decoded changes to keep in memory is restricted by the
>     number of changes. It's not very clear how this relates to memory
>     consumption, as the change size depends on table structure, etc.
> 
>     * The number is "per (sub)transaction", so a transaction with many
>     subtransactions may easily consume a significant amount of memory
>     without actually hitting the limit.
> 
> 
> Also, even without subtransactions, we assemble a ReorderBufferTXN
> per transaction. Since transactions usually occur concurrently,
> systems with many concurrent txns can face lots of memory use.
> 

I don't see how that could be a problem, considering the number of
toplevel transactions is rather limited (to max_connections or so).
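
To make the quoted per-(sub)transaction point concrete, here's a hypothetical
sketch (the threshold and change size are made-up numbers, not the actual
reorderbuffer code): every subtransaction stays under the per-subxact change
cap, so the limit never fires, yet the total buffered memory grows to ~2 GB.

```python
# Illustrative only -- not the reorderbuffer implementation.
MAX_CHANGES_PER_SUBXACT = 4096   # assumed per-(sub)transaction threshold
CHANGE_SIZE_BYTES = 512          # assumed size of one decoded change
                                 # (in reality it depends on table structure)

def limit_tripped(nchanges: int) -> bool:
    """Per-subtransaction check: fires only if one subxact exceeds the cap."""
    return nchanges > MAX_CHANGES_PER_SUBXACT

def total_buffered(nsubxacts: int, nchanges_each: int) -> int:
    """Total bytes buffered across all subtransactions of one transaction."""
    return nsubxacts * nchanges_each * CHANGE_SIZE_BYTES

# 1000 subtransactions, each exactly at the threshold: the per-subxact
# limit never trips, yet roughly 2 GB sits in the reorder buffer.
assert not limit_tripped(MAX_CHANGES_PER_SUBXACT)
print(total_buffered(1000, MAX_CHANGES_PER_SUBXACT))  # 2097152000 bytes
```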

> We can't exclude tables that won't actually be replicated at the reorder
> buffering phase either. So txns use memory whether or not they do
> anything interesting as far as a given logical decoding session is
> concerned. Even if we'll throw all the data away we must buffer and
> assemble it first so we can make that decision.

Yep.

> Because logical decoding considers snapshots and cid increments even
> from other DBs (at least when the txn makes catalog changes) the memory
> use can get BIG too. I was recently working with a system that had
> accumulated 2GB of snapshots ... on each slot. With 7 slots, one for
> each DB.
> 
> So there's lots of room for difficulty with unpredictable memory use.
> 

Yep.

>     So the patch does two things. Firstly, it introduces logical_work_mem, a
>     GUC restricting memory consumed by all transactions currently kept in
>     the reorder buffer
> 
> 
> Does this consider the (currently high, IIRC) overhead of tracking
> serialized changes?
>  

Consider in what sense?
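
For reference, the kind of global accounting a logical_work_mem limit implies
might be sketched like this (names and the "evict the largest transaction"
policy are illustrative assumptions, not the patch's actual code): one counter
covers all transactions in the reorder buffer, and when adding a change pushes
the total over the limit, the largest transaction is picked for serialization
to disk (or streaming).

```python
# Illustrative only -- not the patch's API.
class ReorderBufferSketch:
    def __init__(self, limit_bytes: int):
        self.limit = limit_bytes    # plays the role of logical_work_mem
        self.total = 0              # memory used by all buffered txns
        self.txn_size = {}          # xid -> bytes buffered for that txn

    def add_change(self, xid: int, change_size: int):
        """Account one change; return the xid chosen for eviction, or None."""
        self.txn_size[xid] = self.txn_size.get(xid, 0) + change_size
        self.total += change_size
        if self.total <= self.limit:
            return None
        # Over the limit: evict (serialize/stream) the largest transaction.
        victim = max(self.txn_size, key=self.txn_size.get)
        self.total -= self.txn_size.pop(victim)
        return victim

rb = ReorderBufferSketch(limit_bytes=1000)
assert rb.add_change(1, 600) is None     # total 600, under the limit
assert rb.add_change(2, 300) is None     # total 900, still under
assert rb.add_change(1, 200) == 1        # total 1100: txn 1 (800 B) evicted
assert rb.total == 300                   # only txn 2 remains accounted
```

With a 1000-byte limit, adding 600 + 300 + 200 bytes pushes the total to 1100,
so the 800-byte transaction is evicted and 300 bytes stay accounted.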


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

