Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Tomas Vondra
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date:
Msg-id: 4de1faeb-5d1a-ffb7-6148-8237d6147e01@2ndquadrant.com
In reply to: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Peter Eisentraut <peter.eisentraut@2ndquadrant.com>)
List: pgsql-hackers

On 01/11/2018 08:41 PM, Peter Eisentraut wrote:
> On 12/22/17 23:57, Tomas Vondra wrote:
>> PART 1: adding logical_work_mem memory limit (0001)
>> ---------------------------------------------------
>>
>> Currently, limiting the amount of memory consumed by logical decoding is
>> tricky (or you might say impossible) for several reasons:
> 
> I would like to see some more discussion on this, but I think not a lot
> of people understand the details, so I'll try to write up an explanation
> here.  This code is also somewhat new to me, so please correct me if
> there are inaccuracies, while keeping in mind that I'm trying to simplify.
> 
> ... snip ...

Thanks for a comprehensive summary of the patch!

> 
> "XXX With many subtransactions this might be quite slow, because we'll
> have to walk through all of them. There are some options how we could
> improve that: (a) maintain some secondary structure with transactions
> sorted by amount of changes, (b) not looking for the entirely largest
> transaction, but e.g. for transaction using at least some fraction of
> the memory limit, and (c) evicting multiple transactions at once, e.g.
> to free a given portion of the memory limit (e.g. 50%)."
> 
> (a) would create more overhead for the case where everything fits into
> memory, so it seems unattractive.  Some combination of (b) and (c) seems
> useful, but we'd have to come up with something concrete.
> 

Yeah, when writing that comment I was worried that (a) might get rather
expensive. I was thinking about maintaining a dlist of transactions
sorted by size (ReorderBuffer now only has a hash table), so that we
could evict transactions from the beginning of the list.

But while that speeds up the choice of a transaction to evict, the added
maintenance cost is rather high, particularly when most transactions are
roughly the same size, because in that case we'd probably have to move
nodes around in the list quite often. So it seems wiser to just walk the
list once when looking for a victim.
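
To illustrate the walk-once selection, here is a minimal, self-contained
sketch. It is not the actual ReorderBuffer code; the ToyTxn struct and the
example sizes are placeholders of mine, standing in for ReorderBufferTXN
entries and their tracked change sizes.

    #include <stddef.h>
    #include <stdio.h>

    /* Simplified stand-in for a decoded transaction; the real
     * ReorderBufferTXN tracks far more state.  "size" is the tracked
     * memory used by the queued changes. */
    typedef struct ToyTxn
    {
        unsigned int xid;
        size_t       size;      /* bytes of queued changes */
    } ToyTxn;

    /*
     * Single linear scan over the in-progress transactions, returning the
     * one with the most queued changes (or NULL if there are none).  This
     * is the "walk the list once when looking for a victim" idea: O(n) per
     * eviction, but no maintenance cost on the hot path queuing changes.
     */
    static ToyTxn *
    pick_largest_txn(ToyTxn *txns, int ntxns)
    {
        ToyTxn *largest = NULL;

        for (int i = 0; i < ntxns; i++)
        {
            if (largest == NULL || txns[i].size > largest->size)
                largest = &txns[i];
        }
        return largest;
    }

    int
    main(void)
    {
        ToyTxn txns[] = {
            {.xid = 1001, .size = 64 * 1024},
            {.xid = 1002, .size = 48 * 1024 * 1024},
            {.xid = 1003, .size = 512 * 1024},
        };
        ToyTxn *victim = pick_largest_txn(txns, 3);

        printf("evict xid %u (%zu bytes)\n", victim->xid, victim->size);
        return 0;
    }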

What I'm thinking about instead is tracking just an approximate version
of this - it does not really matter whether we evict the single largest
transaction or one that is a couple of kilobytes smaller. What we care
about is an answer to this question:

    Is there some very large transaction that we could evict to free
    a lot of memory, or are all transactions fairly small?

So perhaps we can define some "size classes" and track to which of them
each transaction belongs. For example, we could split the memory limit
into 100 buckets, each representing a 1% size increment.

A transaction would not switch classes very often, and it would be
trivial to pick the largest transaction. When all the transactions are
squashed into the smallest classes, we may switch to some alternative
strategy. Not sure.
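
For the sake of discussion, a self-contained sketch of the size-class
idea follows. The 64MB limit, the bucket count and the example sizes are
arbitrary assumptions of mine (the real limit would come from the
logical_work_mem GUC), and the class_counts array is just a stand-in for
whatever structure would hang off the ReorderBuffer.

    #include <stddef.h>
    #include <stdio.h>

    #define NUM_SIZE_CLASSES 100    /* 1% increments of the memory limit */

    /* Hypothetical limit; in the patch this would be logical_work_mem. */
    static const size_t memory_limit = 64 * 1024 * 1024;   /* 64 MB */

    /*
     * Map a transaction's tracked size to one of the 100 size classes.
     * Class 0 holds transactions below 1% of the limit, class 99 holds
     * everything at or above 99% of it.
     */
    static int
    txn_size_class(size_t txn_size)
    {
        size_t klass = (txn_size * NUM_SIZE_CLASSES) / memory_limit;

        return (klass >= NUM_SIZE_CLASSES) ? NUM_SIZE_CLASSES - 1 : (int) klass;
    }

    int
    main(void)
    {
        /* A few example transaction sizes and the classes they land in. */
        size_t sizes[] = {100 * 1024, 10 * 1024 * 1024, 60 * 1024 * 1024};
        int    class_counts[NUM_SIZE_CLASSES] = {0};

        for (int i = 0; i < 3; i++)
        {
            int klass = txn_size_class(sizes[i]);

            class_counts[klass]++;
            printf("txn of %zu bytes -> class %d\n", sizes[i], klass);
        }

        /*
         * Picking a victim then means scanning the small, fixed-size array
         * of classes from the top and evicting a transaction from the
         * highest non-empty class; if only the lowest classes are
         * populated, all transactions are small and a different strategy
         * may be needed.
         */
        for (int klass = NUM_SIZE_CLASSES - 1; klass >= 0; klass--)
        {
            if (class_counts[klass] > 0)
            {
                printf("would evict from class %d\n", klass);
                break;
            }
        }
        return 0;
    }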

In any case, I don't really know how expensive the selection actually
is, or whether it's an issue at all. I'll do some measurements.


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

