Re: Single transaction in the tablesync worker?

From: Amit Kapila
Subject: Re: Single transaction in the tablesync worker?
Date:
Msg-id: CAA4eK1+V2Ok2ocKszGgCXTnf3uOSB4yPSovcbjhQ3ZOPARq4vg@mail.gmail.com
In response to: Re: Single transaction in the tablesync worker?  (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
On Fri, Dec 4, 2020 at 10:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Fri, Dec 4, 2020 at 7:53 AM Craig Ringer
> <craig.ringer@enterprisedb.com> wrote:
> >
> > On Thu, 3 Dec 2020 at 17:25, Amit Kapila <amit.kapila16@gmail.com> wrote:
> >
> > > Is there any fundamental problem if
> > > we commit the transaction after initial copy and slot creation in
> > > LogicalRepSyncTableStart and then allow the apply of transactions as
> > > it happens in apply worker?
> >
> > No fundamental problem. Both approaches are fine. Committing the
> > initial copy then doing the rest in individual txns means an
> > incomplete sync state for the table becomes visible, which may not be
> > ideal. Ideally we'd do something like sync the data into a clone of
> > the table then swap the table relfilenodes out once we're synced up.
> >
> > IMO the main advantage of committing as we go is that it would let us
> > use a non-temporary slot and support recovering an incomplete sync and
> > finishing it after interruption by connection loss, crash, etc. That
> > would be advantageous for big table syncs or where the sync has lots
> > of lag to replay. But it means we have to remember sync states, and
> > give users a way to cancel/abort them. Otherwise forgotten temp slots
> > for syncs will cause a mess on the upstream.
> >
> > It also allows the sync slot to advance, freeing any held upstream
> > resources before the whole sync is done, which is good if the upstream
> > is busy and generating lots of WAL.
> >
> > Finally, committing as we go means we won't exceed the cid increment
> > limit in a single txn.
> >
>
> Yeah, all these are advantages of processing
> transaction-by-transaction. IIUC, we primarily need to do two things
> to achieve it: one is to have an additional state in the catalog (say
> "catchup") which will indicate that the initial copy is done. Then we
> need a permanent slot with which we can track the replication
> progress, so that after a restart (due to a crash, connection break,
> etc.) we can resume from the appropriate position.
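The two-part design above can be sketched against the existing catalog. This is only an illustration: pg_subscription_rel and its srsubstate/srsublsn columns are real, but the 'c' state value, the LSN, and the subscription OID shown here are assumptions, not committed behavior.

```sql
-- pg_subscription_rel already tracks per-table sync state in srsubstate:
--   'i' = init, 'd' = data copy in progress, 's' = synced, 'r' = ready.
-- The idea above adds an intermediate state (say 'c' for "catchup"),
-- recorded once the initial copy has committed, together with the LSN
-- reached, so a restarted worker knows where to resume:

UPDATE pg_subscription_rel
   SET srsubstate = 'c',          -- hypothetical "catchup" state
       srsublsn   = '0/16B6C50'   -- illustrative LSN where the copy ended
 WHERE srsubid = 16394            -- illustrative subscription OID
   AND srrelid = 'mytbl1'::regclass;

-- Pairing this with a permanent (non-temporary) replication slot means a
-- restarted worker can resume from srsublsn instead of redoing the copy.
```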
>
> Apart from the above, I think with the current design of tablesync we
> can see partial data of transactions because we allow all the
> tablesync workers to run in parallel. Consider the below scenario:
>
> CREATE TABLE mytbl1(id SERIAL PRIMARY KEY, somedata int, text varchar(120));
> CREATE TABLE mytbl2(id SERIAL PRIMARY KEY, somedata int, text varchar(120));
>
> Tx1
> BEGIN;
> INSERT INTO mytbl1(somedata, text) VALUES (1, '1');
> INSERT INTO mytbl2(somedata, text) VALUES (1, '1');
> COMMIT;
>
> CREATE PUBLICATION mypublication FOR TABLE mytbl;
>

oops, the above statement should be CREATE PUBLICATION mypublication
FOR TABLE mytbl1, mytbl2;
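Putting the corrected statement into the full scenario (the subscriber-side commands and connection string are illustrative assumptions, added only to make the parallel-sync visibility issue concrete):

```sql
-- Publisher: publish both tables so Tx1's changes to each are replicated.
CREATE PUBLICATION mypublication FOR TABLE mytbl1, mytbl2;

-- Subscriber (illustrative):
-- CREATE SUBSCRIPTION mysub
--     CONNECTION 'host=publisher dbname=postgres'
--     PUBLICATION mypublication;

-- Each table gets its own tablesync worker, and the workers run in
-- parallel, so the rows Tx1 inserted can appear in mytbl1 before they
-- appear in mytbl2 (or vice versa): a reader on the subscriber can
-- observe Tx1 only partially applied until both tables are synced.
```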

-- 
With Regards,
Amit Kapila.


