Re: Conflict Detection and Resolution

From: Amit Kapila
Subject: Re: Conflict Detection and Resolution
Date:
Msg-id: CAA4eK1+Vf2CJZz-v3maMHrQmAbC6ABU-QWPwOvoNGvwOG2nneA@mail.gmail.com
In reply to: Re: Conflict Detection and Resolution (Dilip Kumar <dilipbalaut@gmail.com>)
Responses: Re: Conflict Detection and Resolution
List: pgsql-hackers
On Fri, Jul 5, 2024 at 11:58 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:
>
> On Thu, Jul 4, 2024 at 5:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > > > So, the situation will be the same. We can even
> > > > decide to spill the data to files if the decision is that we need to
> > > > wait, to avoid filling the network buffers. But note that waiting
> > > > in the apply worker has consequences: the subscriber won't be able to
> > > > confirm the flush position, the publisher won't be able to vacuum the
> > > > dead rows, and we won't be able to remove WAL either. Last time, when we
> > > > discussed the delay_apply feature, we decided not to proceed because
> > > > of such issues. This is the reason I proposed a cap on the wait time.
> > >
> > > Yes, spilling to file or cap on the wait time should help, and as I
> > > said above maybe a parallel apply worker can also help.
> > >
> >
> > It is not clear to me how a parallel apply worker can help in this
> > case. Can you elaborate on what you have in mind?
>
> If we decide to wait at commit time, and before starting to apply we
> already see that the remote commit_ts is ahead of the local clock, then
> if we apply such transactions using a parallel worker, wouldn't that
> solve the issue of network buffer congestion? The apply worker can then
> move ahead and fetch new transactions from the buffer, since our waiting
> transaction will not block it. I understand that if this transaction is
> going to wait at commit, then any future transaction we fetch might also
> have to wait, because if the earlier transaction's commit timestamp is
> in the future, then a subsequent transaction committed after it must
> also be in the future; eventually it too will go to another parallel
> worker, and soon we would end up consuming all the parallel workers if
> the clock skew is large. So I won't say this resolves the problem, and
> we would still have to fall back to spilling to disk, but only in the
> worst case, when the clock skew is really huge. In most cases, where the
> skew is due to slight clock drift, by the time we apply a medium- to
> large-sized transaction the local clock should have caught up with the
> remote commit_ts, and we might not have to wait at all.
>

Yeah, this is possible, but if we go with the spilling logic first, it
should work for all cases. If we get complaints, we can then explore
executing such transactions via parallel apply workers. Personally, I am
of the opinion that clock synchronization should be handled outside the
database system, via network time protocols like NTP. Still, we can have
some simple mechanism to inform users about the clock skew.
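For illustration, the capped-wait-then-spill decision discussed above could be sketched roughly as follows. This is only a minimal sketch, not PostgreSQL code; the function name, the `max_wait` cap, and the "spill" fallback are assumptions made for the example:

```python
import time

def resolve_commit_ts_skew(remote_commit_ts, max_wait, now=time.time):
    """Decide how to handle a remote transaction whose commit_ts is
    ahead of the local clock.

    Returns 'apply' if the local clock has caught up (possibly after a
    short, capped sleep), or 'spill' if waiting would exceed max_wait
    seconds -- in which case the changes would be spilled to disk so the
    apply worker is not blocked (and the flush position can advance).
    """
    skew = remote_commit_ts - now()
    if skew <= 0:
        return 'apply'   # local clock already at/past the remote commit_ts
    if skew > max_wait:
        return 'spill'   # cap exceeded: don't block the apply worker
    time.sleep(skew)     # small drift: wait for the clock to catch up
    return 'apply'
```

With a small drift (e.g. tens of milliseconds of skew) the function just sleeps briefly and applies; with a huge skew it falls back to spilling, matching the worst-case behavior described above.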

--
With Regards,
Amit Kapila.
