Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Amit Kapila
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Msg-id: CAA4eK1KZ56PoT--pCXNyE-ebrXjWvOz_ntqhqBQs-8fn6S9m2w@mail.gmail.com
In reply to: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Alexey Kondratov <a.kondratov@postgrespro.ru>)
List: pgsql-hackers
On Wed, Oct 23, 2019 at 12:32 AM Alexey Kondratov
<a.kondratov@postgrespro.ru> wrote:
>
> On 22.10.2019 20:22, Tomas Vondra wrote:
> >
> > I think the patch should do the simplest thing possible, i.e. what it
> > does today. Otherwise we'll never get it committed.
> >
>
> I have to agree with Tomas that keeping things as simple as possible
> should be the main priority right now. Otherwise, the entire patch set
> will pass through another release cycle without being committed even
> partially. At the same time, it resolves an important problem from my
> perspective: it moves I/O overhead from the primary to the replica by
> streaming large transactions, which is a nice feature to have, I guess.
>
> Later it would be possible to replace the logical apply worker with a
> bgworker pool in a separate patch, if we decide that it is a viable
> solution. Anyway, regarding Amit's questions:
>
> - I doubt that maintaining a separate buffer on the apply side before
> spilling to disk would help enough. We already have the ReorderBuffer with
> the logical_work_mem limit, and if we exceed that limit on the sender
> side, then most probably we will exceed it on the apply side as well,
>

I think on the sender side the limit applies to unfiltered changes
(that is, to the ReorderBuffer, which holds all the changes), whereas
on the receiver side we will only have the requested changes, which
can make a difference?
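
To make the asymmetry concrete, here is a small Python sketch (not
PostgreSQL code; the table names, change sizes, and limit are invented
purely for illustration) of why the same memory limit can behave
differently on the two sides: the publisher's ReorderBuffer accounts
for every decoded change, while the subscriber only ever receives
changes for the tables it requested.

```python
# Hypothetical model of sender- vs receiver-side memory pressure in
# logical replication. All names and numbers are made up.

WORK_MEM = 1000  # arbitrary units, stands in for logical_decoding_work_mem

# Every change decoded from WAL on the sender (publisher) side.
all_changes = [
    ("orders", 400), ("audit_log", 700), ("orders", 300), ("sessions", 500),
]

# Only "orders" is in the publication, so only its changes reach the
# subscriber after filtering.
published = {"orders"}

sender_usage = sum(size for _, size in all_changes)
receiver_usage = sum(size for table, size in all_changes if table in published)

print(sender_usage)    # total buffered on the sender: 1900, over the limit
print(receiver_usage)  # buffered on the receiver: 700, under the limit
```

Under these invented numbers the sender crosses the limit while the
subscriber stays well below it, which is the scenario where a separate,
differently sized buffer on the apply side could plausibly behave
differently from the sender-side limit.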

> except in the case where this new buffer size is significantly
> higher than logical_work_mem, to keep multiple open xacts.
>

I am not sure, but I think we can have different controlling parameters
on the subscriber side.

> - I still think that we should optimize the database for commits, not
> rollbacks. A BGworker pool is dramatically slower for a rollback-only
> load, though it is at least twice as fast for a commit-only one. I do
> not know how it will perform with a real-life load, but this drawback
> may be unacceptable for a general-purpose database like Postgres.
>
> - Tomas' implementation of streaming with spilling does not have this
> bias between commits and aborts. However, it has a noticeable
> performance drop (~5x slower compared with master [1]) for a large
> transaction consisting of many small rows, although it is not an order
> of magnitude slower.
>

Did you ever identify the reason why it was slower in that case?  I
can see the numbers shared by you and Dilip, which show that the
BGWorker pool is a really good idea and will work great for a
commit-mostly workload, whereas the numbers without it are not very
encouraging; maybe we have not benchmarked enough.  This is the reason
I am trying to see if we can do something to get benefits similar
to what your idea has shown.

I am not against doing something simple for the first version and then
enhancing it later, but it won't be good if we commit it with a
regression in some typical cases and depend on the user to enable it
only when it seems favorable to their case.  Also, it sometimes becomes
difficult to generate enthusiasm for enhancing a feature once the main
patch is committed.  I am not saying that always happens, or will
happen in this case; it is just better if we put in some energy and get
things as good as possible in the first go.  I am as interested as you,
Tomas, or others are; otherwise, I wouldn't have spent a lot of time
disentangling this from the 2PC patch, which seems to have stalled due
to lack of interest.

> Another thing is that about a year ago I found some problems
> with MVCC/visibility and fixed them somehow [1]. If I get it correctly,
> Tomas adapted some of those fixes into his patch set, but I think that
> this part should be reviewed carefully again.
>

Agreed. I have read your emails and can see that you have done very
good work on this project along with Tomas, but unfortunately it
didn't get committed.  At this stage, we are working on just the first
part of the patch, which is to allow the data to spill to disk once it
crosses logical_decoding_work_mem on the master side.  I think there
will be more problems to discuss and solve once that is done.
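
As a rough illustration of the spill behavior under discussion, here is
a Python sketch (not the actual ReorderBuffer code; the sizes, the
data structures, and the details of the eviction policy are simplified
assumptions): memory is tracked per transaction, and once the total
crosses the limit, the transaction using the most memory is serialized
to disk to free up space.

```python
# Simplified model of spilling decoded changes to disk once total
# buffered memory crosses a work-mem limit. Not PostgreSQL source.

WORK_MEM = 100  # stands in for logical_decoding_work_mem, arbitrary units

txn_mem = {}    # xid -> units of buffered changes held in memory
spilled = set() # xids whose buffered changes were written to disk

def add_change(xid, size):
    """Buffer a change for transaction xid; spill if over the limit."""
    txn_mem[xid] = txn_mem.get(xid, 0) + size
    while sum(txn_mem.values()) > WORK_MEM:
        # Evict the transaction currently using the most memory.
        largest = max(txn_mem, key=txn_mem.get)
        spilled.add(largest)
        txn_mem[largest] = 0  # its changes now live on disk

add_change(1, 60)
add_change(2, 30)
add_change(1, 40)  # total would reach 130 -> xid 1 (100 units) is spilled

print(sorted(spilled))        # [1]
print(sum(txn_mem.values()))  # 30
```

The real behavior is more involved (changes are serialized in chunks,
and streaming sends them to the subscriber instead of spilling), but
the sketch captures the basic accounting the limit is meant to enforce.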

> I would be glad to check
> it, but now I am a little bit confused by all the patch set variants
> in the thread. Which is the last one? Is it still dependent on 2PC decoding?
>

I think the latest patches posted by Dilip are not dependent on the
2PC decoding patch, but I haven't studied them yet.  You can find them
at [1][2].  As per the discussion in this thread, we are also trying to
see if we can get some part of the patch series committed first; the
latest patches corresponding to that are posted at [3].

[1] - https://www.postgresql.org/message-id/CAFiTN-vHoksqvV4BZ0479NhugGe4QHq_ezngNdDd-YRQ_2cwug%40mail.gmail.com
[2] - https://www.postgresql.org/message-id/CAFiTN-vT%2B42xRbkw%3DhBnp44XkAyZaKZVA5hcvAMsYth3rk7vhg%40mail.gmail.com
[3] - https://www.postgresql.org/message-id/CAFiTN-vkFB0RBEjVkLWhdgTYShSrSu3kCYObMghgXEwKA1FXRA%40mail.gmail.com

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


