Re: Potential data loss due to race condition during logical replication slot creation
From | Masahiko Sawada
---|---
Subject | Re: Potential data loss due to race condition during logical replication slot creation
Date |
Msg-id | CAD21AoDzLY9vRpo+xb2qPtfn46ikiULPXDpT94sPyFH4GE8bYg@mail.gmail.com
In reply to | Re: Potential data loss due to race condition during logical replication slot creation (Amit Kapila <amit.kapila16@gmail.com>)
Responses | RE: Potential data loss due to race condition during logical replication slot creation
List | pgsql-bugs
On Mon, Mar 18, 2024 at 6:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> If so, one idea to achieve could be that we maintain the
> highest_running_xid while serializing the snapshot and then during
> restore if that highest_running_xid is <= builder->initial_xmin_horizon,
> then we ignore restoring the snapshot. We already have a few such cases
> handled in SnapBuildRestore().

I think that builder->initial_xmin_horizon could be older than
highest_running_xid, for example, when there is a logical replication
slot whose catalog_xmin is old. However, even in this case, we might
need to ignore restoring the snapshot. For example, a slightly modified
test case can still cause the same problem.

The test case in Kuroda-san's v2 patch:

permutation "s0_init" "s0_begin" "s0_insert1" "s1_init" "s2_checkpoint" "s2_get_changes_slot0" "s0_insert2" "s0_commit" "s1_get_changes_slot0" "s1_get_changes_slot1"

The modified test case (add "s0_insert1" between "s0_init" and "s0_begin"):

permutation "s0_init" "s0_insert1" "s0_begin" "s0_insert1" "s1_init" "s2_checkpoint" "s2_get_changes_slot0" "s0_insert2" "s0_commit" "s1_get_changes_slot0" "s1_get_changes_slot1"

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com
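[Editor's note] For reference, a minimal sketch of the extra check proposed in the quoted mail, assuming a hypothetical highest_running_xid field were recorded in the serialized SnapBuild state at serialization time. This is not an actual patch to SnapBuildRestore(), only an illustration of the idea under discussion:

    /*
     * Sketch only: an additional early-exit placed alongside the existing
     * checks in SnapBuildRestore().  The highest_running_xid field is
     * assumed to be newly written when the snapshot is serialized; it does
     * not exist in current PostgreSQL.  Skip restoring the serialized
     * snapshot when that XID is not newer than this builder's
     * initial_xmin_horizon.
     */
    if (TransactionIdPrecedesOrEquals(ondisk.builder.highest_running_xid,
                                      builder->initial_xmin_horizon))
        goto snapshot_not_interesting;

As the mail above points out, such a check alone may not be sufficient, because initial_xmin_horizon can itself be held back by another slot's old catalog_xmin.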