Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Alexey Kondratov
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date:
Msg-id: 6b0edf8b-0b33-f862-dfb2-d8bb2b568465@postgrespro.ru
In reply to: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Kuntal Ghosh <kuntalghosh.2007@gmail.com>)
Responses: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Kuntal Ghosh <kuntalghosh.2007@gmail.com>)
List: pgsql-hackers
On 04.11.2019 13:05, Kuntal Ghosh wrote:
> On Mon, Nov 4, 2019 at 3:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
>> So your result shows that with "streaming on", performance is
>> degrading?  By any chance did you try to see where is the bottleneck?
>>
> Right. But, as we increase the logical_decoding_work_mem, the
> performance improves. I've not analyzed the bottleneck yet. I'm
> looking into the same.

My guess is that 64 kB is simply too small a value. In the table schema used 
for the tests every row takes at least 24 bytes for storing column values. 
Thus, with this logical_decoding_work_mem value the limit should be hit 
after roughly 2500 rows, i.e. about 400 times during a transaction of 
1,000,000 rows.
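
A rough back-of-envelope estimate (counting only the column data, so the 
real number of rows per flush is somewhat lower once per-change bookkeeping 
is included):

    64 kB limit / ~24 bytes per row         ≈ 2700 rows per flush (at most)
    1,000,000 rows / ~2500 rows per flush   ≈ 400 streaming cycles per transaction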

That is just too frequent, given that ReorderBufferStreamTXN involves a whole 
bunch of logic, e.g. it always starts an internal transaction:

/*
  * Decoding needs access to syscaches et al., which in turn use
  * heavyweight locks and such. Thus we need to have enough state around to
  * keep track of those.  The easiest way is to simply use a transaction
  * internally.  That also allows us to easily enforce that nothing writes
  * to the database by checking for xid assignments. ...
  */

It also issues separate stream_start/stop messages around each streamed 
transaction chunk. So if streaming starts and stops too frequently, it adds 
extra overhead and may even interfere with the current in-progress 
transaction.
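
Schematically, each such streaming cycle has to do roughly the following (a 
simplified sketch based on my reading of the patch, with callback arguments 
elided; not the actual patch code):

    using_subtxn = IsTransactionOrTransactionBlock();

    if (using_subtxn)
        BeginInternalSubTransaction("stream");
    else
        StartTransactionCommand();

    rb->stream_start(...);      /* stream_start message to the output plugin */

    /* replay the changes accumulated so far under a historic snapshot */

    rb->stream_stop(...);       /* stream_stop message */

    if (using_subtxn)
        RollbackAndReleaseCurrentSubTransaction();
    else
        AbortCurrentTransaction();

So every time the threshold is crossed we pay the full transaction 
setup/teardown plus an extra pair of protocol messages, which quickly adds up 
at ~400 cycles per transaction.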

If I understand it correctly, such a slowdown is rather expected with too 
small values of logical_decoding_work_mem. It could probably be optimized, 
but I am not sure it is worth doing right now.


Regards

-- 
Alexey Kondratov

Postgres Professional https://www.postgrespro.com
Russian Postgres Company



