Re: Improving the latch handling between logical replication launcher and worker processes.

From: vignesh C
Subject: Re: Improving the latch handling between logical replication launcher and worker processes.
Date:
Msg-id: CALDaNm3AfrQ+XkL+9sOQu_Yo28Qkf9DW=ZSD98pCnM4GcuT8qQ@mail.gmail.com
In response to: Re: Improving the latch handling between logical replication launcher and worker processes.  (Heikki Linnakangas <hlinnaka@iki.fi>)
Responses: Re: Improving the latch handling between logical replication launcher and worker processes.
List: pgsql-hackers
On Thu, 4 Jul 2024 at 16:52, Heikki Linnakangas <hlinnaka@iki.fi> wrote:
>
> I don't quite understand the problem we're trying to fix:
>
> > Currently the launcher's latch is used for the following: a) worker
> > process attach b) worker process exit and c) subscription creation.
> > Since this same latch is used for multiple cases, the launcher process
> > is not able to handle concurrent scenarios like: a) Launcher started a
> > new apply worker and waiting for apply worker to attach and b) create
> > subscription sub2 sending launcher wake up signal. In this scenario,
> > both of them will set latch of the launcher process, the launcher
> > process is not able to identify that both operations have occurred 1)
> > worker is attached 2) subscription is created and apply worker should
> > be started. As a result the apply worker does not get started for the
> > new subscription created immediately and gets started after the
> > timeout of 180 seconds.
>
> I don't see how we could miss a notification. Yes, both events will set
> the same latch. Why is that a problem?

The launcher process waits for the apply worker to attach in the
WaitForReplicationWorkerAttach function. While it is waiting there, its
latch gets set concurrently for two reasons: another create
subscription command and the apply worker attaching. The launcher
detects that the latch is set while waiting in
WaitForReplicationWorkerAttach, resets the latch, and proceeds to the
main loop, where it waits for DEFAULT_NAPTIME_PER_CYCLE (as the latch
has already been reset). Further details are provided below.

> The loop will see that the new
> worker has attached, and that the new subscription has been created, and
> process both events. Right?

Since the latch was reset in WaitForReplicationWorkerAttach, the
launcher skips processing the create subscription event.

To detail this further: consider two concurrent create subscription
commands, first CREATE SUBSCRIPTION sub1, followed immediately by
CREATE SUBSCRIPTION sub2. In some runs, the following sequence of
events unfolds:
After the first create subscription command (sub1) commits, the
backend sets the launcher's latch because of
ApplyLauncherWakeupAtCommit. The launcher process then resets the
latch and identifies the addition of a new subscription in the
pg_subscription list. It requests the postmaster to start the apply
worker background process for sub1 and to notify the launcher once
that apply worker has started. The launcher then waits for the apply
worker (sub1) to attach in the WaitForReplicationWorkerAttach
function.
Meanwhile, the second CREATE SUBSCRIPTION command (sub2), which was
executed concurrently, also sets the launcher's latch (because of
ApplyLauncherWakeupAtCommit).
While the launcher is still in WaitForReplicationWorkerAttach waiting
for the apply worker of sub1 to start, the postmaster starts the apply
worker for subscription sub1 and sends a SIGUSR1 signal to the
launcher process via ReportBackgroundWorkerPID. Upon receiving this
signal, the launcher's latch gets set.

At this point the launcher's latch has been set concurrently for two
reasons: the apply worker for sub1 has started, and the CREATE
SUBSCRIPTION sub2 command has committed.
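
To make the merge of the two wakeups explicit, here is a minimal
standalone sketch (plain C, not PostgreSQL code; all names are
illustrative). A latch is level-triggered, so setting it a second time
before it has been reset does not queue a second wakeup, and a single
reset consumes the notifications from both senders:

/*
 * Minimal standalone model (not PostgreSQL code) of how the two wakeups
 * merge.  All names are illustrative.
 */
#include <stdbool.h>
#include <stdio.h>

static bool latch_is_set = false;   /* stand-in for the launcher's latch */

static void
model_set_latch(const char *who)
{
    latch_is_set = true;            /* setting an already-set latch merges wakeups */
    printf("%s set the latch\n", who);
}

int
main(void)
{
    /* postmaster reports that the sub1 apply worker has started */
    model_set_latch("SIGUSR1 for sub1 worker start");

    /* CREATE SUBSCRIPTION sub2 commits and wakes the launcher */
    model_set_latch("CREATE SUBSCRIPTION sub2");

    /*
     * The attach wait wakes up, resets the latch and only handles the
     * worker-attach case; nothing remembers that sub2 still needs an
     * apply worker.
     */
    latch_is_set = false;
    printf("after one reset, pending wakeups: %d\n", (int) latch_is_set);
    return 0;
}

The same merge happens with the real launcher latch; the problem is
that the reset is done in a code path that only checks for the worker
attach.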

At this juncture, the launcher, which had been awaiting the attachment
of the apply worker, detects that the latch is set and proceeds to
reset it. The reset operation of the latch occurs within the following
section of code in WaitForReplicationWorkerAttach:
...
rc = WaitLatch(MyLatch,
               WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
               10L, WAIT_EVENT_BGWORKER_STARTUP);

if (rc & WL_LATCH_SET)
{
    ResetLatch(MyLatch);
    CHECK_FOR_INTERRUPTS();
}
...

After the latch is reset here, the wakeup intended to trigger starting
the apply worker for subscription sub2 is no longer present. The
launcher returns to the ApplyLauncherMain function, where it waits for
DEFAULT_NAPTIME_PER_CYCLE (180 seconds) before processing the create
subscription request (i.e., creating a new apply worker for sub2).
The issue arises from the latch being reset in
WaitForReplicationWorkerAttach, which can occasionally delay the
synchronization of table data for the subscription.
Another concurrency scenario, as mentioned by Alexander Lakhin at [1],
involves an apply worker exiting with failure concurrently with a
create subscription. Both issues stem from the same underlying cause:
the reset of the latch in WaitForReplicationWorkerAttach.
Even when nothing actually fails, this adds a delay of up to 180
seconds to buildfarm and local test runs.
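
To see the resulting delay end to end, here is a small standalone model
of the sequence (plain C using POSIX primitives, not PostgreSQL code;
the toy latch stands in for the launcher's latch and the 2-second
timeout stands in for DEFAULT_NAPTIME_PER_CYCLE; all names are
illustrative):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t latch_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t latch_cond = PTHREAD_COND_INITIALIZER;
static bool latch_is_set = false;

static void
toy_set_latch(void)
{
    pthread_mutex_lock(&latch_mutex);
    latch_is_set = true;
    pthread_cond_signal(&latch_cond);
    pthread_mutex_unlock(&latch_mutex);
}

/* Wait until the latch is set or timeout_ms elapses, then reset it. */
static bool
toy_wait_latch(long timeout_ms)
{
    struct timespec deadline;
    bool        was_set;

    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += timeout_ms / 1000;
    deadline.tv_nsec += (timeout_ms % 1000) * 1000000L;
    if (deadline.tv_nsec >= 1000000000L)
    {
        deadline.tv_sec++;
        deadline.tv_nsec -= 1000000000L;
    }

    pthread_mutex_lock(&latch_mutex);
    while (!latch_is_set &&
           pthread_cond_timedwait(&latch_cond, &latch_mutex, &deadline) == 0)
        ;
    was_set = latch_is_set;
    latch_is_set = false;       /* the reset that consumes all pending wakeups */
    pthread_mutex_unlock(&latch_mutex);
    return was_set;
}

int
main(void)
{
    /* Both wakeups arrive while the launcher is about to wait for attach. */
    toy_set_latch();            /* SIGUSR1: sub1 apply worker started */
    toy_set_latch();            /* CREATE SUBSCRIPTION sub2 committed */

    /* Attach wait (WaitForReplicationWorkerAttach analogue). */
    printf("attach wait: %s\n", toy_wait_latch(10) ? "latch set" : "timeout");

    /* Main loop wait (ApplyLauncherMain analogue): nothing pending any more. */
    printf("main loop:   %s\n", toy_wait_latch(2000) ? "latch set" : "timeout");

    return 0;
}

Compiled with cc -pthread, the attach wait reports the latch as set,
while the main-loop wait runs to its full timeout because the earlier
reset consumed the sub2 wakeup, mirroring the long nap observed in the
tests.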

This was noticed in the buildfarm, where 027_nosuperuser.pl took more
than 180 seconds at [2]:
...
[21:11:21] t/025_rep_changes_for_schema.pl .... ok     3340 ms ( 0.00 usr  0.00 sys +  0.43 cusr  0.23 csys =  0.66 CPU)
[21:11:25] t/026_stats.pl ..................... ok     4499 ms ( 0.00 usr  0.00 sys +  0.42 cusr  0.21 csys =  0.63 CPU)
[21:14:31] t/027_nosuperuser.pl ............... ok   185457 ms ( 0.01 usr  0.00 sys +  5.91 cusr  2.68 csys =  8.60 CPU)
[21:14:35] t/028_row_filter.pl ................ ok     3998 ms ( 0.01 usr  0.00 sys +  0.86 cusr  0.34 csys =  1.21 CPU)
...

This was also reported by Robert at [3].

[1] - https://www.postgresql.org/message-id/858a7622-2c81-1687-d1df-1322dfcb2e72%40gmail.com
[2] - https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=snakefly&dt=2024-02-01%2020%3A34%3A03&stg=subscription-check
[3] - https://www.postgresql.org/message-id/CA%2BTgmoZkj%3DA39i4obKXADMhzJW%3D6dyGq-C1aGfb%2BjUy9XvxwYA%40mail.gmail.com

Regards,
Vignesh


