Thread: Questions on logical replication


Questions on logical replication

From
Koen De Groote
Date:
I recently read the entire documentation on logical replication, but am left with a question on the buildup of WAL


It is written: " When dropping a subscription, the remote host is not reachable. In that case, disassociate the slot from the subscription using ALTER SUBSCRIPTION before attempting to drop the subscription. If the remote database instance no longer exists, no further action is then necessary. If, however, the remote database instance is just unreachable, the replication slot (and any still remaining table synchronization slots) should then be dropped manually; otherwise it/they would continue to reserve WAL and might eventually cause the disk to fill up. Such cases should be carefully investigated."
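For reference, the manual cleanup that passage describes might be sketched as follows (subscription and slot names are hypothetical; by default the slot is named after the subscription):

```sql
-- On the subscriber: detach the slot so DROP SUBSCRIPTION
-- does not try to contact the unreachable publisher.
ALTER SUBSCRIPTION mysub DISABLE;
ALTER SUBSCRIPTION mysub SET (slot_name = NONE);
DROP SUBSCRIPTION mysub;

-- On the publisher (if it still exists): drop the orphaned
-- slot so it stops reserving WAL.
SELECT pg_drop_replication_slot('mysub');
```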


Assuming a situation where I add tables to the publisher one at a time, and refresh the subscription every time.

What happens if I shut down the subscriber database for a while? The subscription isn't dropped, so am I reading it right that the disk on the publisher will slowly be filling up with WAL? Isn't that always the case if WAL is enabled?

This "cause disk to fill up" warning is quite concerning, and I'd like to understand what could cause it and how likely it is. I thought logical replication uses WAL by default, so doesn't that mean there has to be a log of changes kept anyhow? Even if the WAL isn't written to disk by an "archive_command"?

Regards,
Koen De Groote

Re: Questions on logical replication

From
Koen De Groote
Date:

" Replication slots provide an automated way to ensure that the primary does not remove WAL segments until they have been received by all standbys, and that the primary does not remove rows which could cause a recovery conflict even when the standby is disconnected. "

Am I to understand that a subscription is considered the same as a standby, in this context?

On Wed, Jun 5, 2024 at 12:55 AM Koen De Groote <kdg.dev@gmail.com> wrote:
I recently read the entire documentation on logical replication, but am left with a question on the buildup of WAL


It is written: " When dropping a subscription, the remote host is not reachable. In that case, disassociate the slot from the subscription using ALTER SUBSCRIPTION before attempting to drop the subscription. If the remote database instance no longer exists, no further action is then necessary. If, however, the remote database instance is just unreachable, the replication slot (and any still remaining table synchronization slots) should then be dropped manually; otherwise it/they would continue to reserve WAL and might eventually cause the disk to fill up. Such cases should be carefully investigated."


Assuming a situation where I add tables to the publisher one at a time, and refresh the subscription every time.

What happens if I shut down the subscriber database for a while? The subscription isn't dropped, so am I reading it right that the disk on the publisher will slowly be filling up with WAL? Isn't that always the case if WAL is enabled?

This "cause disk to fill up" warning is quite concerning, and I'd like to understand what could cause it and how likely it is. I thought logical replication uses WAL by default, so doesn't that mean there has to be a log of changes kept anyhow? Even if the WAL isn't written to disk by an "archive_command"?

Regards,
Koen De Groote

Re: Questions on logical replication

From
Adrian Klaver
Date:
On 6/4/24 15:55, Koen De Groote wrote:
> I recently read the entire documentation on logical replication, but am 
> left with a question on the buildup of WAL
> 
> On this page: 
> https://www.postgresql.org/docs/current/logical-replication-subscription.html#LOGICAL-REPLICATION-SUBSCRIPTION-SLOT
<https://www.postgresql.org/docs/current/logical-replication-subscription.html#LOGICAL-REPLICATION-SUBSCRIPTION-SLOT>
> 
> It is written: " When dropping a subscription, the remote host is not 
> reachable. In that case, disassociate the slot from the subscription 
> using |ALTER SUBSCRIPTION| before attempting to drop the subscription. 
> If the remote database instance no longer exists, no further action is 
> then necessary. If, however, the remote database instance is just 
> unreachable, the replication slot (and any still remaining table 
> synchronization slots) should then be dropped manually; otherwise 
> it/they would continue to reserve WAL and might eventually cause the 
> disk to fill up. Such cases should be carefully investigated."
> 
> 
> Assuming a situation where I add tables 1 at a time to the publisher, 
> and refresh the subscription every time.
> 
> What happens if I shut down the subscriber database for a while? The 
> subscription isn't dropped, so am I reading it right that the disk on 
> the publisher will slowly be filling up with WAL? Isn't that always the 
> case if wall is enabled?

https://www.postgresql.org/docs/current/wal-configuration.html

"Checkpoints are points in the sequence of transactions at which it is 
guaranteed that the heap and index data files have been updated with all 
information written before that checkpoint. At checkpoint time, all 
dirty data pages are flushed to disk and a special checkpoint record is 
written to the WAL file. (The change records were previously flushed to 
the WAL files.) In the event of a crash, the crash recovery procedure 
looks at the latest checkpoint record to determine the point in the WAL 
(known as the redo record) from which it should start the REDO 
operation. Any changes made to data files before that point are 
guaranteed to be already on disk. Hence, after a checkpoint, WAL 
segments preceding the one containing the redo record are no longer 
needed and can be recycled or removed. (When WAL archiving is being 
done, the WAL segments must be archived before being recycled or removed.)"

> 
> This "cause disk to fill up" warning is quite concerning, and I'd like 
> to understand what could cause it and how likely it is? I thought 
> logical replication uses WAL by default, so doesn't that mean there has 
> to be a log of changes kept anyhow? Even if the WAL isn't written to 
> disk by an "archive_command"?

https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-SLOTS

"Replication slots provide an automated way to ensure that the primary 
does not remove WAL segments until they have been received by all 
standbys, and that the primary does not remove rows which could cause a 
recovery conflict even when the standby is disconnected."

When you set up logical replication you are 'asking' via the replication 
slot that WAL records be kept on the publisher until the subscriber 
retrieves them.
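How much WAL a slot is currently holding back can be checked on the publisher; a minimal sketch using the standard pg_replication_slots view:

```sql
-- Approximate WAL retained on behalf of each replication slot.
SELECT slot_name,
       active,
       pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
       ) AS retained_wal
FROM pg_replication_slots;
```

An inactive slot whose retained_wal keeps growing is exactly the "disk fills up" scenario the documentation warns about.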

> 
> Regards,
> Koen De Groote

-- 
Adrian Klaver
adrian.klaver@aklaver.com




Re: Questions on logical replication

From
Koen De Groote
Date:
https://www.postgresql.org/docs/current/wal-configuration.html

"Checkpoints are points in the sequence of transactions at which it is
guaranteed that the heap and index data files have been updated with all
information written before that checkpoint. At checkpoint time, all
dirty data pages are flushed to disk and a special checkpoint record is
written to the WAL file. (The change records were previously flushed to
the WAL files.) In the event of a crash, the crash recovery procedure
looks at the latest checkpoint record to determine the point in the WAL
(known as the redo record) from which it should start the REDO
operation. Any changes made to data files before that point are
guaranteed to be already on disk. Hence, after a checkpoint, WAL
segments preceding the one containing the redo record are no longer
needed and can be recycled or removed. (When WAL archiving is being
done, the WAL segments must be archived before being recycled or removed.)"

And this is the same for logical replication and physical replication, I take it.

Thus, if a leader has a standby of the same version, and meanwhile logical replication is being done to a newer version, both those replications are taken into account, is that correct?


 When you set up logical replication you are 'asking' via the replication
slot that WAL records be kept on the publisher until the subscriber
retrieves them.

And if it cannot sync them, due to connectivity loss for instance, the WAL records will not be removed, then?

Regards,
Koen De Groote


On Wed, Jun 5, 2024 at 1:05 AM Adrian Klaver <adrian.klaver@aklaver.com> wrote:
On 6/4/24 15:55, Koen De Groote wrote:
> I recently read the entire documentation on logical replication, but am
> left with a question on the buildup of WAL
>
> On this page:
> https://www.postgresql.org/docs/current/logical-replication-subscription.html#LOGICAL-REPLICATION-SUBSCRIPTION-SLOT <https://www.postgresql.org/docs/current/logical-replication-subscription.html#LOGICAL-REPLICATION-SUBSCRIPTION-SLOT>
>
> It is written: " When dropping a subscription, the remote host is not
> reachable. In that case, disassociate the slot from the subscription
> using |ALTER SUBSCRIPTION| before attempting to drop the subscription.
> If the remote database instance no longer exists, no further action is
> then necessary. If, however, the remote database instance is just
> unreachable, the replication slot (and any still remaining table
> synchronization slots) should then be dropped manually; otherwise
> it/they would continue to reserve WAL and might eventually cause the
> disk to fill up. Such cases should be carefully investigated."
>
>
> Assuming a situation where I add tables 1 at a time to the publisher,
> and refresh the subscription every time.
>
> What happens if I shut down the subscriber database for a while? The
> subscription isn't dropped, so am I reading it right that the disk on
> the publisher will slowly be filling up with WAL? Isn't that always the
> case if wall is enabled?

https://www.postgresql.org/docs/current/wal-configuration.html

"Checkpoints are points in the sequence of transactions at which it is
guaranteed that the heap and index data files have been updated with all
information written before that checkpoint. At checkpoint time, all
dirty data pages are flushed to disk and a special checkpoint record is
written to the WAL file. (The change records were previously flushed to
the WAL files.) In the event of a crash, the crash recovery procedure
looks at the latest checkpoint record to determine the point in the WAL
(known as the redo record) from which it should start the REDO
operation. Any changes made to data files before that point are
guaranteed to be already on disk. Hence, after a checkpoint, WAL
segments preceding the one containing the redo record are no longer
needed and can be recycled or removed. (When WAL archiving is being
done, the WAL segments must be archived before being recycled or removed.)"

>
> This "cause disk to fill up" warning is quite concerning, and I'd like
> to understand what could cause it and how likely it is? I thought
> logical replication uses WAL by default, so doesn't that mean there has
> to be a log of changes kept anyhow? Even if the WAL isn't written to
> disk by an "archive_command"?

https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-SLOTS

"Replication slots provide an automated way to ensure that the primary
does not remove WAL segments until they have been received by all
standbys, and that the primary does not remove rows which could cause a
recovery conflict even when the standby is disconnected."

When you set up logical replication you are 'asking' via the replication
slot that WAL records be kept on the publisher until the subscriber
retrieves them.

>
> Regards,
> Koen De Groote

--
Adrian Klaver
adrian.klaver@aklaver.com

Re: Questions on logical replication

From
Adrian Klaver
Date:
On 6/5/24 14:54, Koen De Groote wrote:
>     https://www.postgresql.org/docs/current/wal-configuration.html
>     <https://www.postgresql.org/docs/current/wal-configuration.html>
> 
>     "Checkpoints are points in the sequence of transactions at which it is
>     guaranteed that the heap and index data files have been updated with
>     all
>     information written before that checkpoint. At checkpoint time, all
>     dirty data pages are flushed to disk and a special checkpoint record is
>     written to the WAL file. (The change records were previously flushed to
>     the WAL files.) In the event of a crash, the crash recovery procedure
>     looks at the latest checkpoint record to determine the point in the WAL
>     (known as the redo record) from which it should start the REDO
>     operation. Any changes made to data files before that point are
>     guaranteed to be already on disk. Hence, after a checkpoint, WAL
>     segments preceding the one containing the redo record are no longer
>     needed and can be recycled or removed. (When WAL archiving is being
>     done, the WAL segments must be archived before being recycled or
>     removed.)"
> 
> 
> And this is the same for logical replication and physical replication, I 
> take it.

High level explanation: both physical and logical replication use the 
WAL files as the starting point. When the recycling is done depends 
on various factors. My suggestion would be to read through the links 
below to get a better idea of what is going on. There is a lot to 
cover, but if you really want to understand it you will need to go through it.

Physical replication

https://www.postgresql.org/docs/current/high-availability.html

27.2.5. Streaming Replication
27.2.6. Replication Slots

Logical replication

https://www.postgresql.org/docs/current/logical-replication.html

WAL

https://www.postgresql.org/docs/current/wal.html



> 
> Thus, if a leader has a standby of the same version, and meanwhile 
> logical replication is being done to a newer version, both those 
> replications are taken into account, is that correct?

Yes, see links above.


> And if it cannot sync them, due to connectivity loss for instance, the 
> WAL records will not be removed, then?

Depends on the type of replication being done. It is possible for 
physical replication to have WAL records removed that are still needed 
downstream.

From

https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION

"If you use streaming replication without file-based continuous 
archiving, the server might recycle old WAL segments before the standby 
has received them. If this occurs, the standby will need to be 
reinitialized from a new base backup. You can avoid this by setting 
wal_keep_size to a value large enough to ensure that WAL segments are 
not recycled too early, or by configuring a replication slot for the 
standby. If you set up a WAL archive that's accessible from the standby, 
these solutions are not required, since the standby can always use the 
archive to catch up provided it retains enough segments."

This is why it is a good idea to go through the links I posted above.

> 
> Regards,
> Koen De Groote
> 


-- 
Adrian Klaver
adrian.klaver@aklaver.com




Re: Questions on logical replication

From
Koen De Groote
Date:
I'll give them a read, though it might take a few weekends

Meanwhile, this seems to be what I'm looking for:


" Replication slots provide an automated way to ensure that the primary does not remove WAL segments until they have been received by all standbys, and that the primary does not remove rows which could cause a recovery conflict even when the standby is disconnected."

I'm reading that as: "if there is a replication slot, if the standby is disconnected, WAL is kept"

And if we know WAL is kept in the "pg_wal" directory, that sounds like it could slowly but surely fill up disk space.
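(As an aside: on newer releases, PostgreSQL 13 and up, this growth can be capped with max_slot_wal_keep_size; a slot that falls too far behind is invalidated rather than being allowed to fill the disk. A config sketch, value chosen for illustration only:)

```sql
-- postgresql.conf (PostgreSQL 13+): limit WAL retained by any
-- replication slot; -1 (the default) means unlimited.
max_slot_wal_keep_size = '50GB'
```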


But again, I'll give them a read. I've read all of logical replication already, and I feel like I didn't get my answer there.

Thanks for the help


Regards,
Koen De Groote

On Thu, Jun 6, 2024 at 12:19 AM Adrian Klaver <adrian.klaver@aklaver.com> wrote:
On 6/5/24 14:54, Koen De Groote wrote:
>     https://www.postgresql.org/docs/current/wal-configuration.html
>     <https://www.postgresql.org/docs/current/wal-configuration.html>
>
>     "Checkpoints are points in the sequence of transactions at which it is
>     guaranteed that the heap and index data files have been updated with
>     all
>     information written before that checkpoint. At checkpoint time, all
>     dirty data pages are flushed to disk and a special checkpoint record is
>     written to the WAL file. (The change records were previously flushed to
>     the WAL files.) In the event of a crash, the crash recovery procedure
>     looks at the latest checkpoint record to determine the point in the WAL
>     (known as the redo record) from which it should start the REDO
>     operation. Any changes made to data files before that point are
>     guaranteed to be already on disk. Hence, after a checkpoint, WAL
>     segments preceding the one containing the redo record are no longer
>     needed and can be recycled or removed. (When WAL archiving is being
>     done, the WAL segments must be archived before being recycled or
>     removed.)"
>
>
> And this is the same for logical replication and physical replication, I
> take it.

High level explanation: both physical and logical replication use the
WAL files as the starting point. When the recycling is done depends
on various factors. My suggestion would be to read through the links
below to get a better idea of what is going on. There is a lot to
cover, but if you really want to understand it you will need to go through it.

Physical replication

https://www.postgresql.org/docs/current/high-availability.html

27.2.5. Streaming Replication
27.2.6. Replication Slots

Logical replication

https://www.postgresql.org/docs/current/logical-replication.html

WAL

https://www.postgresql.org/docs/current/wal.html



>
> Thus, if a leader has a standby of the same version, and meanwhile
> logical replication is being done to a newer version, both those
> replications are taken into account, is that correct?

Yes, see links above.


> And if it cannot sync them, due to connectivity loss for instance, the
> WAL records will not be removed, then?

Depends on the type of replication being done. It is possible for
physical replication to have WAL records removed that are still needed
downstream.

From

https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION

"If you use streaming replication without file-based continuous
archiving, the server might recycle old WAL segments before the standby
has received them. If this occurs, the standby will need to be
reinitialized from a new base backup. You can avoid this by setting
wal_keep_size to a value large enough to ensure that WAL segments are
not recycled too early, or by configuring a replication slot for the
standby. If you set up a WAL archive that's accessible from the standby,
these solutions are not required, since the standby can always use the
archive to catch up provided it retains enough segments."

This is why it is a good idea to go through the links I posted above.

>
> Regards,
> Koen De Groote
>


--
Adrian Klaver
adrian.klaver@aklaver.com

Re: Questions on logical replication

From
Kashif Zeeshan
Date:


On Fri, Jun 7, 2024 at 3:19 AM Koen De Groote <kdg.dev@gmail.com> wrote:
I'll give them a read, though it might take a few weekends

Meanwhile, this seems to be what I'm looking for:


" Replication slots provide an automated way to ensure that the primary does not remove WAL segments until they have been received by all standbys, and that the primary does not remove rows which could cause a recovery conflict even when the standby is disconnected."

I'm reading that as: "if there is a replication slot, if the standby is disconnected, WAL is kept"

And if we know WAL is kept in the "pg_wal" directory, that sounds like it could slowly but surely fill up disk space.

Hi

Yes, that is a consideration with logical replication, but the benefit outweighs the possible cost.
The retained WAL size will only increase while the standby is offline.

Regards
Kashif Zeeshan
Bitnine Global 


But again, I'll give them a read. I've read all of logical replication already, and I feel like I didn't get my answer there.

Thanks for the help


Regards,
Koen De Groote

On Thu, Jun 6, 2024 at 12:19 AM Adrian Klaver <adrian.klaver@aklaver.com> wrote:
On 6/5/24 14:54, Koen De Groote wrote:
>     https://www.postgresql.org/docs/current/wal-configuration.html
>     <https://www.postgresql.org/docs/current/wal-configuration.html>
>
>     "Checkpoints are points in the sequence of transactions at which it is
>     guaranteed that the heap and index data files have been updated with
>     all
>     information written before that checkpoint. At checkpoint time, all
>     dirty data pages are flushed to disk and a special checkpoint record is
>     written to the WAL file. (The change records were previously flushed to
>     the WAL files.) In the event of a crash, the crash recovery procedure
>     looks at the latest checkpoint record to determine the point in the WAL
>     (known as the redo record) from which it should start the REDO
>     operation. Any changes made to data files before that point are
>     guaranteed to be already on disk. Hence, after a checkpoint, WAL
>     segments preceding the one containing the redo record are no longer
>     needed and can be recycled or removed. (When WAL archiving is being
>     done, the WAL segments must be archived before being recycled or
>     removed.)"
>
>
> And this is the same for logical replication and physical replication, I
> take it.

High level explanation: both physical and logical replication use the
WAL files as the starting point. When the recycling is done depends
on various factors. My suggestion would be to read through the links
below to get a better idea of what is going on. There is a lot to
cover, but if you really want to understand it you will need to go through it.

Physical replication

https://www.postgresql.org/docs/current/high-availability.html

27.2.5. Streaming Replication
27.2.6. Replication Slots

Logical replication

https://www.postgresql.org/docs/current/logical-replication.html

WAL

https://www.postgresql.org/docs/current/wal.html



>
> Thus, if a leader has a standby of the same version, and meanwhile
> logical replication is being done to a newer version, both those
> replications are taken into account, is that correct?

Yes, see links above.


> And if it cannot sync them, due to connectivity loss for instance, the
> WAL records will not be removed, then?

Depends on the type of replication being done. It is possible for
physical replication to have WAL records removed that are still needed
downstream.

From

https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION

"If you use streaming replication without file-based continuous
archiving, the server might recycle old WAL segments before the standby
has received them. If this occurs, the standby will need to be
reinitialized from a new base backup. You can avoid this by setting
wal_keep_size to a value large enough to ensure that WAL segments are
not recycled too early, or by configuring a replication slot for the
standby. If you set up a WAL archive that's accessible from the standby,
these solutions are not required, since the standby can always use the
archive to catch up provided it retains enough segments."

This is why it is a good idea to go through the links I posted above.

>
> Regards,
> Koen De Groote
>


--
Adrian Klaver
adrian.klaver@aklaver.com

Re: Questions on logical replication

From
Adrian Klaver
Date:
On 6/6/24 15:19, Koen De Groote wrote:
> I'll give them a read, though it might take a few weekends
> 
> Meanwhile, this seems to be what I'm looking for:
> 
>  From 
> https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-SLOTS
<https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-SLOTS>
> 
> " Replication slots provide an automated way to ensure that the primary 
> does not remove WAL segments until they have been received by all 
> standbys, and that the primary does not remove rows which could cause a 
> recovery conflict 
> <https://www.postgresql.org/docs/current/hot-standby.html#HOT-STANDBY-CONFLICT> even when the standby is
disconnected."
> 
> I'm reading that as: "if there is a replication slot, if the standby is 
> disconnected, WAL is kept"
> 
> And if we know WAL is kept in the "pg_wal" directory, that sounds like 
> it could slowly but surely fill up disk space.
> 
> 
> But again, I'll give them a read. I've read all of logical replication 
> already, and I feel like I didn't get my answer there.

It would be a good idea to provide a fairly specific outline of what 
you are trying to achieve; then it would be easier for folks to offer 
suggestions on what to do or not to do.

> 
> Thanks for the help
> 
> 
> Regards,
> Koen De Groote

-- 
Adrian Klaver
adrian.klaver@aklaver.com




Re: Questions on logical replication

From
Koen De Groote
Date:
What I'm trying to do is upgrade a PG11 database to PG16, using logical replication.

The PG11 has an active and a standby, and there are a handful of databases. One particular database has a few tables just over 100GB, then a few hundred tables near 1GB.

What I'd do is start a publication with no tables and add them one at a time, refreshing the subscription each time.

This might take a long time, so my main questions relate to potential network issues or various situations where the instance receiving the logical replication suddenly stops being able to receive.

Resyncing, and the effects of WAL buildup, are my main concern.
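The table-at-a-time loop described here might look like this (publication, subscription, and connection details are hypothetical):

```sql
-- On the PG11 publisher: start with an empty publication.
CREATE PUBLICATION upgrade_pub;

-- On the PG16 subscriber:
CREATE SUBSCRIPTION upgrade_sub
  CONNECTION 'host=old-primary dbname=mydb user=repl'
  PUBLICATION upgrade_pub;

-- Then, per table:
ALTER PUBLICATION upgrade_pub ADD TABLE some_table;   -- publisher
ALTER SUBSCRIPTION upgrade_sub REFRESH PUBLICATION;   -- subscriber
```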

Accidentally sent a mail to only your email, sorry for that.

Regards,
Koen De Groote

On Fri, Jun 7, 2024 at 5:15 PM Adrian Klaver <adrian.klaver@aklaver.com> wrote:
On 6/6/24 15:19, Koen De Groote wrote:
> I'll give them a read, though it might take a few weekends
>
> Meanwhile, this seems to be what I'm looking for:
>
>  From
> https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-SLOTS <https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-SLOTS>
>
> " Replication slots provide an automated way to ensure that the primary
> does not remove WAL segments until they have been received by all
> standbys, and that the primary does not remove rows which could cause a
> recovery conflict
> <https://www.postgresql.org/docs/current/hot-standby.html#HOT-STANDBY-CONFLICT> even when the standby is disconnected."
>
> I'm reading that as: "if there is a replication slot, if the standby is
> disconnected, WAL is kept"
>
> And if we know WAL is kept in the "pg_wal" directory, that sounds like
> it could slowly but surely fill up disk space.
>
>
> But again, I'll give them a read. I've read all of logical replication
> already, and I feel like I didn't get my answer there.

It would be a good idea to provide a fairly specific outline of what
you are trying to achieve; then it would be easier for folks to offer
suggestions on what to do or not to do.

>
> Thanks for the help
>
>
> Regards,
> Koen De Groote

--
Adrian Klaver
adrian.klaver@aklaver.com

Re: Questions on logical replication

From
Adrian Klaver
Date:
On 6/8/24 10:40, Koen De Groote wrote:
> What I'm trying to do is upgrade a PG11 database to PG16, using logical 
> replication.

Have you looked at pg_upgrade?:

https://www.postgresql.org/docs/current/pgupgrade.html

> 
> The PG11 has an active and a standby, there are a handful of databases. 
> On particular one has a few tables just over 100GB, then a few 100 
> tables near 1GB.

1 GB each?

> 
> What I'd do is start a publication with no tables and add them 1 at a 
> time, refreshing subscription each time.
> 
> This might take a long time, so my main questions relate to potential 
> network issues or various situations where the instance receiving the 
> logical replication, suddenly stop being able to receive.
> 
> Resyncing, and the effects of WAL buildup, are my main concern.
> 
> Accidentally sent a mail to only your email, sorry for that.
> 
> Regards,
> Koen De Groote

-- 
Adrian Klaver
adrian.klaver@aklaver.com




Re: Questions on logical replication

From
Justin
Date:

On Sat, Jun 8, 2024 at 1:41 PM Koen De Groote <kdg.dev@gmail.com> wrote:
What I'm trying to do is upgrade a PG11 database to PG16, using logical replication.

The PG11 has an active and a standby, and there are a handful of databases. One particular database has a few tables just over 100GB, then a few hundred tables near 1GB.

What I'd do is start a publication with no tables and add them one at a time, refreshing the subscription each time.

This might take a long time, so my main questions relate to potential network issues or various situations where the instance receiving the logical replication suddenly stops being able to receive.

Resyncing, and the effects of WAL buildup, are my main concern.

Accidentally sent a mail to only your email, sorry for that.

Regards,
Koen De Groote


This approach does not prevent WAL build up.

The WAL build up occurs during the initial sync of a table; once that table is synced, the WAL is replayed and released. The parent worker then becomes responsible for replaying the WAL for that table.

The WAL build up happens during the initial sync of the data, table by table, NOT during the entire sync of all the tables that have been published.

For a 1 GB table the initial sync will be very fast, so I doubt any individual table will cause enough WAL build up to put the publisher at risk of crashing.

Once a table becomes synced, the main subscriber worker keeps the WAL replayed. If there are any errors during the replay of WAL, such as missing indexes for replica identities during an UPDATE or DELETE, this will cause the main subscriber worker's slot on the publisher to start backing up WAL files. If there are missing replica identities, the affected tables will have to be dropped from the publication and the subscription refreshed. The WAL is already written with incorrect information, so the table on the subscriber is most likely not in a recoverable state.

I suggest confirming all tables have replica identities or primary keys before going any further. With PG 11, avoid REPLICA IDENTITY FULL as this causes a full table scan on the subscriber for PG 15 and earlier. PG 16 on the subscriber can use a different unique index that has NOT NULL for all participating columns if the publisher is using REPLICA IDENTITY FULL on the published table.
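The replica identity check suggested above can be approximated with a catalog query; a sketch that lists tables relying on the default identity without a primary key:

```sql
-- Tables with REPLICA IDENTITY DEFAULT but no primary key:
-- UPDATE/DELETE on these will fail to replicate logically.
SELECT c.oid::regclass AS table_name
FROM pg_class c
WHERE c.relkind = 'r'
  AND c.relreplident = 'd'
  AND c.relnamespace NOT IN
      ('pg_catalog'::regnamespace, 'information_schema'::regnamespace)
  AND NOT EXISTS (
    SELECT 1 FROM pg_index i
    WHERE i.indrelid = c.oid AND i.indisprimary
  );
```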

One must understand the above before deploying logical replication. 

Hope this helps

Re: Questions on logical replication

From
Koen De Groote
Date:
> Have you looked at pg_upgrade?:

I have, but I want to keep downtime to a minimum, and from my understanding the switchover to a fully synced logical replica only requires updating your sequences, which should be possible in less than 60 seconds.
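Since logical replication (through PG16) does not replicate sequence values, the cutover step mentioned here amounts to copying sequence positions across; one way to generate the statements on the publisher (a sketch, using the pg_sequences view):

```sql
-- On the PG11 publisher: emit a setval() call per sequence,
-- to be run on the PG16 subscriber at cutover.
SELECT format('SELECT setval(%L, %s);',
              schemaname || '.' || sequencename, last_value)
FROM pg_sequences
WHERE last_value IS NOT NULL;
```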

> 1 GB each?

Yes, each. Roughly around there.


On Sat, Jun 8, 2024 at 7:46 PM Adrian Klaver <adrian.klaver@aklaver.com> wrote:
On 6/8/24 10:40, Koen De Groote wrote:
> What I'm trying to do is upgrade a PG11 database to PG16, using logical
> replication.

Have you looked at pg_upgrade?:

https://www.postgresql.org/docs/current/pgupgrade.html

>
> The PG11 has an active and a standby, there are a handful of databases.
> On particular one has a few tables just over 100GB, then a few 100
> tables near 1GB.

1 GB each?

>
> What I'd do is start a publication with no tables and add them 1 at a
> time, refreshing subscription each time.
>
> This might take a long time, so my main questions relate to potential
> network issues or various situations where the instance receiving the
> logical replication, suddenly stop being able to receive.
>
> Resyncing, and the effects of WAL buildup, are my main concern.
>
> Accidentally sent a mail to only your email, sorry for that.
>
> Regards,
> Koen De Groote

--
Adrian Klaver
adrian.klaver@aklaver.com

Re: Questions on logical replication

From
Koen De Groote
Date:
> If there are any errors during the replay of WAL such as missing indexes for Replica Identities during an Update or Delete  this will cause the main subscriber worker slot on the publisher to start backing up WAL files

And also if the connection breaks, from what I understand, is that correct? Anything that stops the subscription, including disabling the subscription, is that right?

> I suggest confirming all tables have replica identities or primary keys before going any further.

Yes, I am aware of this. I made a small script that prints which tables I have added to the publication and are done syncing, and which are currently not being replicated.
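A script like that presumably boils down to a query on pg_subscription_rel on the subscriber, something like:

```sql
-- Per-table sync state for all subscribed tables.
SELECT srrelid::regclass AS table_name,
       CASE srsubstate
            WHEN 'i' THEN 'initializing'
            WHEN 'd' THEN 'copying data'
            WHEN 's' THEN 'synchronized'
            WHEN 'r' THEN 'ready (streaming)'
            ELSE srsubstate::text
       END AS state
FROM pg_subscription_rel
ORDER BY 1;
```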

> With PG 11 avoid REPLICA IDENTITY FULL as this causes full table scan on the subscriber for PG 15 and earlier.

I'm also aware of this. My plan is to create a publication with no tables, and add them 1 by 1, refreshing the subscriber each time.

I'm not planning on using "REPLICA IDENTITY FULL" anywhere.
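The per-table loop described above would look roughly like this (publication and subscription names are hypothetical):

```sql
-- On the PG11 publisher: start with an empty publication, then grow it.
CREATE PUBLICATION upgrade_pub;
ALTER PUBLICATION upgrade_pub ADD TABLE public.some_table;

-- On the PG16 subscriber: pick up the newly added table,
-- which triggers its initial sync.
ALTER SUBSCRIPTION upgrade_sub REFRESH PUBLICATION;
```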


On Sat, Jun 8, 2024 at 10:33 PM Justin <zzzzz.graf@gmail.com> wrote:



This approach does not prevent WAL build up.

The WAL buildup occurs during the initial sync of a table; once that table is synced, the WAL is replayed and released.  The parent worker then becomes responsible for replaying the WAL for that table.

The WAL buildup happens during the initial sync of the data, table by table, NOT during the entire sync of all the tables that have been published.

For a 1 GB table the initial sync will be very fast, so I doubt any individual table will cause enough WAL buildup to put the publisher at risk of crashing.
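Either way, the WAL retained per slot can be watched on the publisher while the sync runs, e.g.:

```sql
-- WAL each replication slot is holding back from removal.
SELECT slot_name,
       active,
       pg_size_pretty(
           pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;
```

A large and growing retained_wal on an inactive slot is the disk-filling scenario the documentation warns about.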


Re: Questions on logical replication

From
Justin
Date:


On Tue, Jun 11, 2024 at 5:43 PM Koen De Groote <kdg.dev@gmail.com> wrote:
> If there are any errors during the replay of WAL such as missing indexes for Replica Identities during an Update or Delete  this will cause the main subscriber worker slot on the publisher to start backing up WAL files

And also if the connection breaks, from what I understand, is that correct? Anything that stops the subscription, including disabling the subscription, is that right?
 
Yes to all.... 


> I suggest confirming all tables have replica identities or primary keys before going any further.

Yes, I am aware of this. I made a small script that prints which tables I have added to the publication and are done syncing, and which are currently not being replicated.
 

> With PG 11 avoid REPLICA IDENTITY FULL as this causes full table scan on the subscriber for PG 15 and earlier.

I'm also aware of this. My plan is to create a publication with no tables, and add them 1 by 1, refreshing the subscriber each time.
 
Why?  What benefit does this provide you?  Add all the tables when creating the publication and be done with it.  I get doing this when trying to understand how it all works on test boxes, but for production I have NO idea what you're trying to accomplish.


I'm not planning on using "REPLICA IDENTITY FULL" anywhere.
Good 

Re: Questions on logical replication

From
Koen De Groote
Date:
> Why?  what benefit does this provide you??   Add all the tables when creating the publication and be done with it...  I get this when trying to understand how this all works on test boxes, but for production NO idea what you're trying to accomplish

Adding all tables at once means adding the gigantic tables as well. Disk I/O and network traffic are serious concerns, as is increased CPU usage affecting queries on the live system, as well as transaction ID wraparound.

Initial sync can be a serious concern, depending on the size of the table.

Here's a nice guide where people did a logical replication upgrade, explaining why they did it this way: https://knock.app/blog/zero-downtime-postgres-upgrades


Re: Questions on logical replication

From
Justin
Date:


On Thu, Jun 13, 2024 at 6:01 AM Koen De Groote <kdg.dev@gmail.com> wrote:
> Why?  what benefit does this provide you??   Add all the tables when creating the publication and be done with it...  I get this when trying to understand how this all works on test boxes, but for production NO idea what you're trying to accomplish

Adding all tables at once means adding the gigantic tables as well. Disk IO and Network traffic are a serious concern, increased CPU usage affecting queries of the live system, as well as transaction wraparound.

Initial sync can be a serious concern, depending on the size of the table.

The number of initial sync workers can be controlled via max_sync_workers_per_subscription;
see https://www.postgresql.org/docs/current/logical-replication-config.html

If you want to do one table at a time, just set the sync workers to 1.
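On the subscriber that amounts to something like (values illustrative):

```
# postgresql.conf on the subscriber
max_logical_replication_workers = 4    # apply worker + sync workers
max_sync_workers_per_subscription = 1  # copy one table at a time
```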

If bandwidth is a problem, either from the disk or the network, direct the network traffic from the subscriber through a proxy or firewall to throttle the network speed.  Note that slowing the copy will cause WAL to build up on the publisher.

CPU load on the publisher is very low; it's actually hard to see it doing anything, as it's just reading the disk and streaming to the subscriber.

For large tables with lots of indexes, drop the indexes so the copy completes as fast as possible and WAL buildup is minimized.  For me, WAL buildup has only been an issue when dealing with multi-TB tables, where it takes several days to copy the data for one table.

One trick is to remove all the indexes on the subscriber during the initial sync except for the primary key, so the subscriber has less work to do.
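To know what to recreate afterwards, the secondary index definitions can be saved before dropping them; the table name below is hypothetical:

```sql
-- On the subscriber: capture definitions of non-primary-key indexes
-- before dropping them for the initial copy.
SELECT indexrelid::regclass        AS index_name,
       pg_get_indexdef(indexrelid) AS recreate_sql
FROM pg_index
WHERE indrelid = 'public.big_table'::regclass
  AND NOT indisprimary;
```

Keep the recreate_sql output somewhere safe, then run each statement once the table reaches the ready state.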


Here's a nice guide where people did a logical replication upgrade, explaining why they did it this way: https://knock.app/blog/zero-downtime-postgres-upgrades

The blog suggests overly complicated things, such as only doing 100GB chunks of data at a time.  Maybe that's warranted if the publisher is scarce on resources, or the table is multi-TB in size and requires days to weeks to copy...

If the publisher is so low on resources that logical replication is problematic, one can create a binary replica, promote it, and convert it to a logical replica, skipping the initial sync.  Then upgrade that server.  There is a minor outage required to convert a binary replica to a logical replica; I've done it in under 30 seconds.
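Very roughly, and with all names hypothetical, that conversion hinges on lining up a logical slot's LSN with the standby's promotion point; the real procedure has more steps and should be rehearsed before production use:

```sql
-- On the primary: create the logical slot and note the LSN it returns.
SELECT pg_create_logical_replication_slot('upgrade_slot', 'pgoutput');

-- On the standby: set recovery_target_lsn to that LSN, let it catch up,
-- then promote it. Afterwards, on the promoted server:
CREATE SUBSCRIPTION upgrade_sub
    CONNECTION 'host=old-primary dbname=app user=repl'
    PUBLICATION upgrade_pub
    WITH (copy_data = false, create_slot = false, slot_name = 'upgrade_slot');
```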


 
