Re: [PATCH] Reuse Workers and Replication Slots during Logical Replication

From: Amit Kapila
Subject: Re: [PATCH] Reuse Workers and Replication Slots during Logical Replication
Date:
Msg-id: CAA4eK1LnMfHpcjjjgiYENOuTvUhtDzGf9GZc626ZRChayp9abw@mail.gmail.com
In response to: Re: [PATCH] Reuse Workers and Replication Slots during Logical Replication  (Melih Mutlu <m.melihmutlu@gmail.com>)
Responses: Re: [PATCH] Reuse Workers and Replication Slots during Logical Replication  (Melih Mutlu <m.melihmutlu@gmail.com>)
List: pgsql-hackers
On Wed, Jul 5, 2023 at 1:48 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:
>
> Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com> wrote on Tue,
> 4 Jul 2023 at 08:42:
> > > > But in the later patch the tablesync worker tries to reuse the slot during the
> > > > synchronization, so in this case the application_name should be the same as the
> > > > slotname.
> > > >
> > >
> > > Fair enough. I am slightly afraid that if we can't show the benefits
> > > with the later patches then we may need to drop them, but at this
> > > stage I feel we need to investigate why they are not helping.
> >
> > Agreed. Now I'm planning to do performance testing independently. We can
> > discuss based on that or on Melih's results.
>
> Here I attached what I use for performance testing of this patch.
>
> I have only benchmarked the patch set with reusing connections very
> roughly so far, but it seems to improve things quite significantly. For
> example, it took 611 ms to sync 100 empty tables, whereas it took 1782 ms
> without reusing connections.
> The first 3 patches from the set actually bring a good amount of
> improvement, but I'm not sure about the later patches yet.
>

I suggest we should then focus first on those 3 patches, get them
committed, and then look at the remaining ones.

> Amit Kapila <amit.kapila16@gmail.com> wrote on Mon, 3 Jul 2023
> at 08:59:
> > On thinking about this, the primary benefit we were expecting was from
> > saving network round trips for slot drop/create, but since we anyway
> > need an extra round trip to establish a snapshot, that benefit was not
> > visible. This is just a theory, so we should validate it. Another idea,
> > as discussed before [1], could be to try copying multiple tables in a
> > single transaction. Now, keeping a transaction open for a longer time
> > could have side-effects on the publisher node. So, we probably need to
> > ensure that we don't perform multiple large syncs in one transaction,
> > and even for smaller tables (and later sequences) do it only up to some
> > threshold number of tables, which we can figure out by some tests.
> > Also, another safety check could be that anytime we need to perform
> > streaming (sync with the apply worker), we won't copy more tables in
> > the same transaction.
> >
> > Thoughts?
>
> Yeah, maybe going to the publisher to create a slot versus only a
> snapshot does not really make enough of a difference. I was hoping that
> creating only a snapshot with an existing replication slot would help
> performance. I guess I was either wrong or am missing something in the
> implementation.
>
> The tricky bit with keeping a long transaction to copy multiple tables
> is deciding how many tables one transaction can copy.
>

Yeah, I was thinking that we should not allow copying more than some
threshold of data in one transaction. After every copy, we will check
the size of the table and add it to the size already copied in the same
transaction. Once the total crosses a certain threshold, we will end
the transaction. This may not be a very good scheme, but if it helps
then it would be much simpler than the create-only-snapshot approach.
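
To make the idea concrete, here is a minimal, self-contained sketch of
that batching logic (not actual tablesync.c code; the threshold value,
struct, and all names are hypothetical): copy tables in the same
transaction, accumulate the copied size, and end the transaction once
the size threshold is crossed or the table needs streaming to catch up
with the apply worker.

/*
 * Hypothetical sketch of the batching idea, not PostgreSQL code: keep
 * copying tables in the same transaction until the cumulative copied
 * size crosses a threshold, then commit and start a new transaction.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define COPY_BATCH_SIZE_THRESHOLD   (64 * 1024 * 1024)  /* 64MB, arbitrary */

typedef struct TableToSync
{
    const char *relname;
    int64_t     relsize;          /* size seen after copying the table */
    bool        needs_streaming;  /* sync must catch up with the apply worker */
} TableToSync;

static void
sync_tables(TableToSync *tables, int ntables)
{
    int64_t copied_in_xact = 0;
    bool    in_xact = false;

    for (int i = 0; i < ntables; i++)
    {
        if (!in_xact)
        {
            printf("BEGIN\n");
            in_xact = true;
            copied_in_xact = 0;
        }

        /* the actual table copy would happen here */
        printf("  copy %s (%lld bytes)\n",
               tables[i].relname, (long long) tables[i].relsize);
        copied_in_xact += tables[i].relsize;

        /*
         * End the transaction if the copied size crossed the threshold, or
         * if this table needs streaming (the safety check mentioned above).
         */
        if (copied_in_xact >= COPY_BATCH_SIZE_THRESHOLD ||
            tables[i].needs_streaming)
        {
            printf("COMMIT\n");
            in_xact = false;
        }
    }

    if (in_xact)
        printf("COMMIT\n");
}

int
main(void)
{
    TableToSync tables[] = {
        {"t1", 10 * 1024 * 1024, false},
        {"t2", 70 * 1024 * 1024, false},
        {"t3",  1 * 1024 * 1024, false},
        {"t4",  2 * 1024 * 1024, true},
        {"t5",  5 * 1024 * 1024, false},
    };

    sync_tables(tables, (int) (sizeof(tables) / sizeof(tables[0])));
    return 0;
}

With the sample sizes above, t1 and t2 land in one transaction (the
64MB threshold is crossed after t2), t3 and t4 in the next (t4 forces a
commit because it needs streaming), and t5 in its own transaction.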

--
With Regards,
Amit Kapila.


