Discussion: Backup solution over unreliable network

Backup solution over unreliable network

From: Achilleas Mantzios
Date:
Hello,
we've been running our backup solution for the last 5 months to a second site which has an unreliable network
connection. We had problems with barman, since it doesn't support backup resume, and there is no option to disable the
replication slot, in the sense that it is better to sacrifice the backup rather than fill up the primary with WALs and
bring the primary down. Another issue is backing up entirely from the secondary: with barman this is not possible, as
streaming (or archiving) must originate from the primary. So I want to ask two things here:

- Backing up to a remote site over an unreliable channel is a limited use case by itself; it is useful for local PITR
restores on specific tables/data, or in case the whole primary suffers a disaster.
Is there any other benefit that would justify building a solution for it?
- I have only read the best reviews about PgBackRest; can PgBackRest address those issues?

Thank you!

-- 
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Backup solution over unreliable network

From: Stephen Frost
Date:
Greetings,

* Achilleas Mantzios (achill@matrix.gatewaynet.com) wrote:
> we've been running our backup solution for the last 5 months to a second
> site which has an unreliable network connection. We had problems with
> barman, since it doesn't support backup resume, also no option to disable
> the replication slot, in the sense, that it is better to sacrifice the
> backup rather than fill up the primary with WALs and bring the primary down.
> Another issue is now supporting entirely backing up from the secondary. With
> barman this is not possible, streaming (or archiving) must originate from
> the primary. So I want to ask two things here:
> - Backing up to a remote site over an unreliable channel is a limited use
> case by itself, it is useful for local PITR restores on specific
> tables/data, or in case the whole primary suffers a disaster. Is there any
> other benefit that would justify building a solution for it?

Please don't build your own solution, it's really quite difficult to get
backups done correctly.

> - I have only read the best reviews about PgBackRest, can PgBackRest address those issues?

Glad to hear you've read good reviews about pgbackrest.  As for
addressing these issues, pgbackrest has:

- Backup resume
- Max WAL lag (in other words, you can have it simply start throwing WAL
  away if it can't archive it, rather than allowing the primary to run
  out of disk space)
- Backup using the replica, primarily (note that this, currently,
  requires access to the primary, but the bulk of the data comes from
  the replica)
- Incremental/differential backup
- Parallel backup/resume and parallel archiving/fetching
- Backup verification: we checksum every file backed up and verify those
  checksums on a resume, and we make sure that every WAL file needed to
  restore the backup has made it into the archive.
- Delta restore

Which I believe covers most of the use-cases you brought up.
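
To give a rough idea of how those pieces fit together, here is an illustrative sketch only, with made-up host
names, paths and stanza name; check the configuration reference for the real details.  On the database host it
looks something like:

  # /etc/pgbackrest/pgbackrest.conf on the primary (illustrative values)
  [global]
  # off-site repository host
  repo1-host=backup.example.com
  # prefer throwing WAL away to filling up the primary's disk
  archive-push-queue-max=50G

  [demo]
  pg1-path=/var/lib/postgresql/10/main

with archive_command = 'pgbackrest --stanza=demo archive-push %p' in postgresql.conf, while the repository
host's stanza additionally lists the standby (pg2-host/pg2-path) and sets backup-standby=y so that the bulk of
the backup is read from the replica.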

When we first implemented backup using the replica we had concerns
regarding doing a 'full' replica-based backup, and we didn't really see
there being a lot of demand for such a use-case (the replica has access
to the primary in general if it's a streaming replica, after all...),
but we might be open to revisiting that.

Thanks!

Stephen

Attachments

Re: Backup solution over unreliable network

From: Achilleas Mantzios
Date:
On 30/11/18 2:06 μ.μ., Stephen Frost wrote:
> Greetings,
>
> * Achilleas Mantzios (achill@matrix.gatewaynet.com) wrote:
>> we've been running our backup solution for the last 5 months to a second
>> site which has an unreliable network connection. We had problems with
>> barman, since it doesn't support backup resume, also no option to disable
>> the replication slot, in the sense, that it is better to sacrifice the
>> backup rather than fill up the primary with WALs and bring the primary down.
>> Another issue is now supporting entirely backing up from the secondary. With
>> barman this is not possible, streaming (or archiving) must originate from
>> the primary. So I want to ask two things here:
>> - Backing up to a remote site over an unreliable channel is a limited use
>> case by itself, it is useful for local PITR restores on specific
>> tables/data, or in case the whole primary suffers a disaster. Is there any
>> other benefit that would justify building a solution for it?
> Please don't build your own solution, it's really quite difficult to get
> backups done correctly.

By "building" I meant setting up, nothing fancier :)

>
>> - I have only read the best reviews about PgBackRest, can PgBackRest address those issues?
> Glad to hear you've read good reviews about pgbackrest.  As for
> addressing these issues, pgbackrest has:
>
> - Backup resume
> - Max WAL lag (in other words, you can have it simply start throwing WAL
>    away if it can't archive it, rather than allowing the primary to run
>    out of disk space)

This is just superb! In our case we had the following architecture (now barman is defunct):

Primary (consistent snapshots with pg_start/stop_backup) --> reliable net (archive_command via rsync) --> WAL repository
    | (async streaming replication)
    | (reliable net)
    V
Standby --> unreliable net (barman via method rsync + barman streaming from standby ***) --> remote cloud provider site (barman)

So Primary and Standby are in the same cloud provider over a (mostly) consistent network, whereas the barman (remote
recovery) site communicates over the internet. We would like to keep the old functionality (or even add a new
PgBackRest node in the main cloud provider), so the question is: is there a way for archive-push to push to two
different stanzas? Or to delegate the archive-push to work from the Standby?

*** newer barman docs (2.5) say this is not supported (wasn't so clear in 2.4)



> - Backup using the replica, primarily (note that this, currently,
>    requires access to the primary, but the bulk of the data comes from
>    the replica)
> - Incremental/differential backup
> - Parallel backup/resume and parallel archiving/fetching
> - Backup verification- we checksum every file backed up and verify those
>    checksums on a resume, and we make sure that every WAL file needed to
>    restore the backup has made it into the archive.
> - Delta restore
>
> Which I believe covers most of the use-cases you brought up.
>
> When we first implemented backup using the replica we had concerns
> regarding doing a 'full' replica-based backup, and we didn't really see
> there being a lot of demand for such a use-case (the replica has access
> to the primary in general if it's a streaming replica, after all...),
> but we might be open to revisiting that.

Thank you a lot! We'll definitely consider PgBackRest.

> Thanks!
>
> Stephen


-- 
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Backup solution over unreliable network

From: Stephen Frost
Date:
Greetings,

* Achilleas Mantzios (achill@matrix.gatewaynet.com) wrote:
> On 30/11/18 2:06 μ.μ., Stephen Frost wrote:
> >>- I have only read the best reviews about PgBackRest, can PgBackRest address those issues?
> >Glad to hear you've read good reviews about pgbackrest.  As for
> >addressing these issues, pgbackrest has:
> >
> >- Backup resume
> >- Max WAL lag (in other words, you can have it simply start throwing WAL
> >   away if it can't archive it, rather than allowing the primary to run
> >   out of disk space)
>
> This is just superb! In our case we had the following architecture (now barman is defunct) :
>
> Primary (consistent snapshots with pg_start/stop_backup) --> reliable net (archive_command via rsync) --> WAL repository
>    | (async streaming replication)
>    | (reliable net)
>    V
> Standby --> unreliable net (barman via method rsync + barman streaming from standby ***) --> remote cloud provider site (barman)
>
> So Primary and Standby are in the same cloud provider over consistent
> (mostly) network, whereas the barman (remote recovery) site communicates
> over internet. We would like to keep the old functionality (or even add a
> new PgBackRest node in the main cloud provider, so the question is : is
> there a way for archive-push to two different stanzas? Or delegate the
> archive-push to work from the Standby ?

We've had a few folks using pgbackrest to push to two stanzas by way of
basically doing 'pgbackrest --stanza=a archive-push && pgbackrest
--stanza=b archive-push' and with that it does work, and you could
combine that with the max WAL setting, potentially, but it's not a
solution that I'm really a fan of.  That's definitely a use-case we've
been thinking about though and have plans to support in the future,
but there are other things we're tackling now and so multi-repo hasn't
been a priority.
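
Concretely, the hand-rolled version is an archive_command along these lines (config file locations and stanza
names invented for the example; it is one long line even though it wraps here):

  archive_command = 'pgbackrest --config=/etc/pgbackrest/local.conf --stanza=local archive-push %p && pgbackrest --config=/etc/pgbackrest/offsite.conf --stanza=offsite --archive-push-queue-max=50G archive-push %p'

If either push fails the whole command fails and PostgreSQL keeps the WAL segment and retries later; the idea of
the queue-max limit on the off-site invocation is that, once too much WAL has piled up, that push starts
returning success while dropping the WAL instead of holding the primary hostage.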

We've also considered supporting archive-mode=always and being able to
have the standby also push WAL and while we may support that in the
future, I'd say it's farther down on the list than multi-repo support.
As I recall, David Steele also had some specific technical concerns
around how to handle two systems pushing into the same WAL archive.
Having archive-mode=always be allowed if it's going to an independent
repo is an interesting thought though and might be simpler to do.

Thanks!

Stephen

Attachments

Re: Backup solution over unreliable network

From: Achilleas Mantzios
Date:
Hello Stephen!

On 30/11/18 5:29 μ.μ., Stephen Frost wrote:
> Greetings,
>
> * Achilleas Mantzios (achill@matrix.gatewaynet.com) wrote:
>> On 30/11/18 2:06 μ.μ., Stephen Frost wrote:
>>>> - I have only read the best reviews about PgBackRest, can PgBackRest address those issues?
>>> Glad to hear you've read good reviews about pgbackrest.  As for
>>> addressing these issues, pgbackrest has:
>>>
>>> - Backup resume
>>> - Max WAL lag (in other words, you can have it simply start throwing WAL
>>>    away if it can't archive it, rather than allowing the primary to run
>>>    out of disk space)
>> This is just superb! In our case we had the following architecture (now barman is defunct) :
>>
>> Primary (consistent snapshots with pg_start/stop_backup) --> reliable net (archive_command via rsync) --> WAL repository
>>     | (async streaming replication)
>>     | (reliable net)
>>     V
>> Standby --> unreliable net (barman via method rsync + barman streaming from standby ***) --> remote cloud provider site (barman)
>>
>> So Primary and Standby are in the same cloud provider over consistent
>> (mostly) network, whereas the barman (remote recovery) site communicates
>> over internet. We would like to keep the old functionality (or even add a
>> new PgBackRest node in the main cloud provider, so the question is : is
>> there a way for archive-push to two different stanzas? Or delegate the
>> archive-push to work from the Standby ?
> We've had a few folks using pgbackrest to push to two stanzas by way of
> basically doing 'pgbackrest --stanza=a archive-push && pgbackrest
> --stanza=b archive-push' and with that it does work, and you could
> combine that with the max WAL setting, potentially, but it's not a
> solution that I'm really a fan of.  That's definitely a use-case we've
> been thinking about though and have plans to support in the future,
> but there are other things we're tackling now and so multi-repo hasn't
> been a priority.

If we called with e.g. --archive-push-queue-max=50G for the unreliable stanza and left the default for the reliable
stanza, that would be OK, I guess. Yes, I understand you'd like a more systematic approach to the problem, but for the
moment do you see any potential risk in doing what you described?

>
> We've also considered supporting archive-mode=always and being able to
> have the standby also push WAL and while we may support that in the
> future, I'd say it's farther down on the list than multi-repo support.
> As I recall, David Steele also had some specific technical concerns
> around how to handle two systems pushing into the same WAL archive.

I recall that with barman 2.4 and PostgreSQL 10 I had absolutely no problem receiving the WAL stream from the secondary
and WAL archives from the primary, AS LONG AS there was no exclusive pg_start/stop_backup happening on the primary. The
moment pg_start_backup (exclusive) started on the primary, that very same barman started to complain about duplicate
errors IIRC, which means the WALs with the same name were different. The WAL files themselves were identical except for
some bytes in the header (which I guess have to do with exclusive backup).

> Having archive-mode=always be allowed if it's going to an independent
> repo is an interesting thought though and might be simpler to do.

Yes, the idea is to protect the primary and do everything against the secondary.

> Thanks!

Thank you!!

>
> Stephen


-- 
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Backup solution over unreliable network

From: David Steele
Date:
On 11/30/18 10:29 AM, Stephen Frost wrote:
> * Achilleas Mantzios (achill@matrix.gatewaynet.com) wrote:
>> On 30/11/18 2:06 μ.μ., Stephen Frost wrote:
>>>> - I have only read the best reviews about PgBackRest, can PgBackRest address those issues?
>>> Glad to hear you've read good reviews about pgbackrest.  As for

<...>

> We've also considered supporting archive-mode=always and being able to
> have the standby also push WAL and while we may support that in the
> future, I'd say it's farther down on the list than multi-repo support.
> As I recall, David Steele also had some specific technical concerns
> around how to handle two systems pushing into the same WAL archive.
> Having archive-mode=always be allowed if it's going to an independent
> repo is an interesting thought though and might be simpler to do.

The issue here is ensuring that only one system is writing to the 
repository at a time.  This is easy enough if there is a dedicated 
repo-host but is much harder if the repo is on something like S3 or NFS.

Using independent repos might work, but we'd need a way to ensure the 
configuration doesn't get broken.

Regards,
-- 
-David
david@pgmasters.net


Re: Backup solution over unreliable network

From: David Steele
Date:
On 11/30/18 11:24 AM, Achilleas Mantzios wrote:
> On 30/11/18 5:29 μ.μ., Stephen Frost wrote:

>> We've had a few folks using pgbackrest to push to two stanzas by way of
>> basically doing 'pgbackrest --stanza=a archive-push && pgbackrest
>> --stanza=b archive-push' and with that it does work, and you could
>> combine that with the max WAL setting, potentially, but it's not a
>> solution that I'm really a fan of.  That's definitely a use-case we've
>> been thinking about though and have plans to support in the future,
>> but there are other things we're tackling now and so multi-repo hasn't
>> been a priority.
> 
> If we called with e.g. --archive-push-queue-max=50G for the unreliable 
> stanza and let to default for the reliable stanza that would be OK, I 
> guess. Yes I understand you'd like a more systematic approach to the 
> problem, but for the moment do you see any potential risk in doing what 
> you described?

Multiple stanzas are tricky to configure if async archiving is in use, 
otherwise it is relatively straightforward.  You just need two 
configuration files and each archive command will need one explicitly 
configured (--config).

If async archiving is enabled then each stanza will also need a separate 
spool directory.  This configuration has never been tested and I 
recommend against it.
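
As a rough sketch (file names, hosts and stanza names here are invented), that means something like:

  # /etc/pgbackrest/local.conf
  [global]
  repo1-host=backup-local.example.com

  [local]
  pg1-path=/var/lib/postgresql/10/main

  # /etc/pgbackrest/offsite.conf
  [global]
  repo1-host=backup-offsite.example.com
  archive-push-queue-max=50G

  [offsite]
  pg1-path=/var/lib/postgresql/10/main

with the archive_command chaining one archive-push per --config, as in the command Stephen showed, and, if async
archiving were ever enabled, a distinct spool-path in each file.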

>> We've also considered supporting archive-mode=always and being able to
>> have the standby also push WAL and while we may support that in the
>> future, I'd say it's farther down on the list than multi-repo support.
>> As I recall, David Steele also had some specific technical concerns
>> around how to handle two systems pushing into the same WAL archive.
> 
> I recall with barman 2.4 and postgresql 10, I had absolutely no problem 
> receiving wal stream from the secondary and WAL archives from the 
> primary, AS LONG AS there was no exclusive pg_start/stop_backup 
> happening on the primary. The moment pg_start_backup (exclusive) started 
> on the primary, the very same barman started to complain about errors in 
> duplicates IIRC, which means the WALs with same name were different. The 
> WAL themselves were identical except some bytes on the header (which I 
> guess have to do with exclusive backup).

I don't think this is because of the exclusive backup, but I'm not sure 
what is happening.  There are a few scenarios where WAL files may not be 
binary equal between the primary and standby but this isn't one that I 
know of.

>> Having archive-mode=always be allowed if it's going to an independent
>> repo is an interesting thought though and might be simpler to do.

pgBackRest currently requires some files and all WAL to be sent from the 
primary even when doing backup from standby.  We may improve this in the 
future but it's not on the road map right now.
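
For reference, the backup-from-standby piece is configured on the repository host roughly like this (host names
and paths are placeholders):

  [demo]
  pg1-host=primary.example.com
  pg1-path=/var/lib/postgresql/10/main
  pg2-host=standby.example.com
  pg2-path=/var/lib/postgresql/10/main
  backup-standby=y

The primary (pg1) is still contacted for start/stop backup and those few files, while the file copy itself runs
against the standby (pg2).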

Regards,
-- 
-David
david@pgmasters.net


Re: Backup solution over unreliable network

From: Evan Bauer
Date:
Achilleas,

I may be over-simplifying your situation, but have you considered breaking the problem into two pieces?  First backing
up to a local drive and then using rsync to move those files over the unreliable network to the remote site.

Like others who have responded, I can heartily recommend pgbackrest.  But if the network stinks, then I’d break the
problem in two and leave PostgreSQL out of the network equation.

Cheers,

Evan

Sent from my iPhone

> On Nov 30, 2018, at 05:17, Achilleas Mantzios <achill@matrix.gatewaynet.com> wrote:
>
> Hello,
> we've been running our backup solution for the last 5 months to a second site which has an unreliable network
> connection. We had problems with barman, since it doesn't support backup resume, also no option to disable the
> replication slot, in the sense that it is better to sacrifice the backup rather than fill up the primary with WALs and
> bring the primary down. Another issue is now supporting entirely backing up from the secondary. With barman this is not
> possible, streaming (or archiving) must originate from the primary. So I want to ask two things here:
> - Backing up to a remote site over an unreliable channel is a limited use case by itself, it is useful for local PITR
> restores on specific tables/data, or in case the whole primary suffers a disaster. Is there any other benefit that would
> justify building a solution for it?
> - I have only read the best reviews about PgBackRest, can PgBackRest address those issues?
>
> Thank you!
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>



Re: Backup solution over unreliable network

From: Achilleas Mantzios
Date:
On 30/11/18 8:22 μ.μ., Evan Bauer wrote:
> Achilleas,
>
> I may be over-simplifying your situation, but have you considered breaking the problem into two pieces?  First
> backing up to a local drive and then using rsync to move those files over the unreliable network to the remote site.
>
> Like others who have responded, I can heartily recommend pgbackrest.  But if the network stinks, then I’d break the
> problem in two and leave PostgreSQL out of the network equation.


Pretty good idea, but:

1) those rsync transfers have to be somehow DB-aware, otherwise lots of 
things might break: checksums, order of WALs, etc. There would be the 
need to write a whole solution and end up ... getting one of the 
established solutions anyway.

2) the rsync part would go basically unattended, meaning no smart 
software would be taking care of it, monitoring it, sending alerts, etc. 
Also, we have had our issues with rsync in the past over unreliable 
networks, like getting error messages for which Google returns one or no 
results (no pgsql stuff, just system scripts). No wonder more and more 
PgSQL backup solutions are moving away from rsync.

Am I missing something or exaggerating?


>
> Cheers,
>
> Evan
>
> Sent from my iPhone
>
>> On Nov 30, 2018, at 05:17, Achilleas Mantzios <achill@matrix.gatewaynet.com> wrote:
>>
>> Hello,
>> we've been running our backup solution for the last 5 months to a second site which has an unreliable network
>> connection. We had problems with barman, since it doesn't support backup resume, also no option to disable the
>> replication slot, in the sense that it is better to sacrifice the backup rather than fill up the primary with WALs and
>> bring the primary down. Another issue is now supporting entirely backing up from the secondary. With barman this is not
>> possible, streaming (or archiving) must originate from the primary. So I want to ask two things here:
>> - Backing up to a remote site over an unreliable channel is a limited use case by itself, it is useful for local
>> PITR restores on specific tables/data, or in case the whole primary suffers a disaster. Is there any other benefit that
>> would justify building a solution for it?
>> - I have only read the best reviews about PgBackRest, can PgBackRest address those issues?
>>
>> Thank you!
>>
>> -- 
>> Achilleas Mantzios
>> IT DEV Lead
>> IT DEPT
>> Dynacom Tankers Mgmt
>>
>>
>


Re: Backup solution over unreliable network

From: Achilleas Mantzios
Date:
On 30/11/18 6:50 μ.μ., David Steele wrote:
> On 11/30/18 11:24 AM, Achilleas Mantzios wrote:
>> On 30/11/18 5:29 μ.μ., Stephen Frost wrote:
>
>>> We've had a few folks using pgbackrest to push to two stanzas by way of
>>> basically doing 'pgbackrest --stanza=a archive-push && pgbackrest
>>> --stanza=b archive-push' and with that it does work, and you could
>>> combine that with the max WAL setting, potentially, but it's not a
>>> solution that I'm really a fan of.  That's definitely a use-case we've
>>> been thinking about though and have plans to support in the future,
>>> but there are other things we're tackling now and so multi-repo hasn't
>>> been a priority.
>>
>> If we called with e.g. --archive-push-queue-max=50G for the 
>> unreliable stanza and let to default for the reliable stanza that 
>> would be OK, I guess. Yes I understand you'd like a more systematic 
>> approach to the problem, but for the moment do you see any potential 
>> risk in doing what you described?
>
> Multiple stanzas are tricky to configure if async archiving is in use, 
> otherwise it is relatively straightforward.  You just need two 
> configuration files and each archive command will need one explicitly 
> configured (--config).
>
> If async archiving is enabled then each stanza will also need a 
> separate spool directory.  This configuration has never been tested 
> and I recommend against it.


Thank you a lot, will start with non-async!


>
>>> We've also considered supporting archive-mode=always and being able to
>>> have the standby also push WAL and while we may support that in the
>>> future, I'd say it's farther down on the list than multi-repo support.
>>> As I recall, David Steele also had some specific technical concerns
>>> around how to handle two systems pushing into the same WAL archive.
>>
>> I recall with barman 2.4 and postgresql 10, I had absolutely no 
>> problem receiving wal stream from the secondary and WAL archives from 
>> the primary, AS LONG AS there was no exclusive pg_start/stop_backup 
>> happening on the primary. The moment pg_start_backup (exclusive) 
>> started on the primary, the very same barman started to complain 
>> about errors in duplicates IIRC, which means the WALs with same name 
>> were different. The WAL themselves were identical except some bytes 
>> on the header (which I guess have to do with exclusive backup).
>
> I don't think this is because of the exclusive backup, but I'm not 
> sure what is happening.  There are a few scenarios where WAL files may 
> not be binary equal between the primary and standby but this isn't one 
> that I know of.
>
>>> Having archive-mode=always be allowed if it's going to an independent
>>> repo is an interesting thought though and might be simpler to do.
>
> pgBackRest currently requires some files and all WAL to be sent from 
> the primary even when doing backup from standby.  We may improve this 
> in the future but it's not on the road map right now.
>
> Regards,


Re: Backup solution over unreliable network

From: David Steele
Date:
On 11/30/18 1:49 PM, Achilleas Mantzios wrote:
> 
> On 30/11/18 8:22 μ.μ., Evan Bauer wrote:
>> Achilleas,
>>
>> I may be over-simplifying your situation, but have you considered
>> breaking the problem into two pieces?  First backing up to a local
>> drive and then using rsync to move those files over the unreliable
>> network to the remote site.
>>
>> Like other who have responded, I can heartily recommend pgbackrest. 
>> But if the network stinks, then I’d break the problem in two and leave
>> PostgreSQL out of the network equation.
> 
> Pretty good idea, but :
> 
> 1) those rsync transfers have to be somehow db aware, otherwise lots of
> things might break, checksums, order of WALs, etc. There would be the
> need to write a whole solution and end up ... getting one of the
> established solutions

It's actually perfectly OK to rsync a pgBackRest repository.  We've
already done the hard work of interacting with the database and gotten
the backups into a format that can be rsync'd, backed up to tape with
standard enterprise backup tools, etc.

It is common to back up the pgBackRest repo or individual backups (with
--archive-copy enabled) and we have not seen any issues.

BTW, in this context I expect local means in the same data center, not
on the database host.
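
So something as simple as the following, run from the repository host, is fine (paths and host name are just an
example):

  # mirror the local pgBackRest repository to the off-site host;
  # --partial lets an interrupted transfer resume on a flaky link
  rsync -a --partial /var/lib/pgbackrest/ offsite.example.com:/backups/pgbackrest/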

> 2) the rsync part would go basically unattended, meaning no smart
> software would be taking care of it, monitoring it, sending alerts, etc.
> Also we had our issues with rsync in the past with unreliable networks
> like getting error messages for which google returns one or no results
> (no pgsql stuff, just system scripts) . No wonder more and more PgSQL
> backup solutions move away from rsync.

I agree that this is a concern -- every process needs to be monitored
and if you can avoid the extra step that would be best.  But if things
start getting complicated it might be the simpler option.

Regards,
-- 
-David
david@pgmasters.net


Re: Backup solution over unreliable network

From: Achilleas Mantzios
Date:
On 1/12/18 1:12 π.μ., David Steele wrote:
> On 11/30/18 1:49 PM, Achilleas Mantzios wrote:
>> On 30/11/18 8:22 μ.μ., Evan Bauer wrote:
>>> Achilleas,
>>>
>>> I may be over-simplifying your situation, but have you considered
>>> breaking the problem into two pieces?  First backing up to a local
>>> drive and then using rsync to move those files over the unreliable
>>> network to the remote site.
>>>
>>> Like other who have responded, I can heartily recommend pgbackrest.
>>> But if the network stinks, then I’d break the problem in two and leave
>>> PostgreSQL out of the network equation.
>> Pretty good idea, but :
>>
>> 1) those rsync transfers have to be somehow db aware, otherwise lots of
>> things might break, checksums, order of WALs, etc. There would be the
>> need to write a whole solution and end up ... getting one of the
>> established solutions
> It's actually perfectly OK to rsync a pgBackRest repository.  We've
> already done the hard work of interacting with the database and gotten
> the backups into a format that can be rsync'd, backed up to tape, with
> standard enterprise backups tools, etc.
>
> It is common to backup the pgBackRest repo or individual backups (with
> --archive-copy enabled) and we have not seen any issues.
>
> BTW, in this context I expect local means in the same data center, not
> on the database host.

Great info!

>
>> 2) the rsync part would go basically unattended, meaning no smart
>> software would be taking care of it, monitoring it, sending alerts, etc.
>> Also we had our issues with rsync in the past with unreliable networks
>> like getting error messages for which google returns one or no results
>> (no pgsql stuff, just system scripts) . No wonder more and more PgSQL
>> backup solutions move away from rsync.
> I agree that this is a concern -- every process needs to be monitored
> and if you can avoid the extra step that would be best.  But if things
> start getting complicated it might be the simpler option.

All well noted! Thank you!

>
> Regards,


Re: Backup solution over unreliable network

From: Achilleas Mantzios
Date:
Hello David, Stephen, All and HPNY

On 30/11/18 6:50 μ.μ., David Steele wrote:
> On 11/30/18 11:24 AM, Achilleas Mantzios wrote:
>> On 30/11/18 5:29 μ.μ., Stephen Frost wrote:
>
>>> We've had a few folks using pgbackrest to push to two stanzas by way of
>>> basically doing 'pgbackrest --stanza=a archive-push && pgbackrest
>>> --stanza=b archive-push' and with that it does work, and you could
>>> combine that with the max WAL setting, potentially, but it's not a
>>> solution that I'm really a fan of.  That's definitely a use-case we've
>>> been thinking about though and have plans to support in the future,
>>> but there are other things we're tackling now and so multi-repo hasn't
>>> been a priority.
>>
>> If we called with e.g. --archive-push-queue-max=50G for the unreliable stanza and left the default for the reliable
>> stanza, that would be OK, I guess. Yes, I understand you'd like a more systematic
>> approach to the problem, but for the moment do you see any potential risk in doing what you described?
>
> Multiple stanzas are tricky to configure if async archiving is in use, otherwise it is relatively straightforward.
> You just need two configuration files and each archive command will need one explicitly configured (--config).
>
> If async archiving is enabled then each stanza will also need a separate spool directory.  This configuration has
> never been tested and I recommend against it.
Just finished a test backup of our 1.2T logical subscriber test node DB! With a deliberate interrupt, and with
--resume --process-max=4, it worked just great!
 
On our production primary/physical standby cluster I want to retain our (primitive) local backup/archive functionality,
which we do via:
archive_command = /usr/bin/rsync -a --delay-updates %p sma:/smadb/pgsql/pitr/%f
and instead of using a second local pgbackrest repo, just combine the archive_command as-is with pgbackrest to the remote
repo, with something like:
archive_command = /usr/bin/rsync -a --delay-updates %p sma:/smadb/pgsql/pitr/%f && pgbackrest --stanza=dynacom --archive-push-queue-max=50G archive-push %p

I read the code and saw that --archive-push-queue-max works even when not in async mode (default push). We are not
planning for async at this early stage. Do you see any potential problem with the above?

>
> pgBackRest currently requires some files and all WAL to be sent from the primary even when doing backup from
> standby.  We may improve this in the future but it's not on the road map right now.

We are planning to back up from the physical standby, but as you said the archive_command would be running from the
primary.

>
> Regards,


-- 
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Backup solution over unreliable network

From: David Steele
Date:
On 1/16/19 10:52 AM, Achilleas Mantzios wrote:
> Hello David, Stephen, All and HPNY
> On 30/11/18 6:50 μ.μ., David Steele wrote:
>>
>> Multiple stanzas are tricky to configure if async archiving is in use, 
>> otherwise it is relatively straightforward.  You just need two 
>> configuration files and each archive command will need one explicitly 
>> configured (--config).
>>
>> If async archiving is enabled then each stanza will also need a 
>> separate spool directory.  This configuration has never been tested 
>> and I recommend against it.

> Just tested finished backing up our 1.2T logical subscriber test node DB 
> ! With a deliberate interrupt and with --resume --process-max=4 and it 
> worked just great!
> On our production primary/physical standby cluster I want to retain our 
> (primitive) local backup/archive functionality, which we do via :
> archive_command = /usr/bin/rsync -a --delay-updates %p 
> sma:/smadb/pgsql/pitr/%f
> and instead of using a second local pgbackrest repo, just combine 
> archive_command asis with pgbackrest to the remote repo with something 
> like :
> archive_command = /usr/bin/rsync -a --delay-updates %p 
> sma:/smadb/pgsql/pitr/%f && pgbackrest --stanza=dynacom 
> --archive-push-queue-max=50G archive-push %p

> I read the code and saw that --archive-push-queue-max works even when 
> not in async mode (default push). We are not planning for async at this 
> early stage. Do you see any potential problem with the above?

This seems reasonable since there is only one pgBackRest archive command.

If you do eventually decide you need async then the rsync command will 
become a major bottleneck -- pgBackRest is simply much faster than rsync.

>> pgBackRest currently requires some files and all WAL to be sent from 
>> the primary even when doing backup from standby.  We may improve this 
>> in the future but it's not on the road map right now.
> 
> We are planning to backup from the physical standby, but as you said the 
> archive_command would be running from the primary.

We haven't seen any issue with this configuration.  If WAL rates are 
high then replication will likely lag whereas pgBackRest can keep up 
with higher WAL rates using parallel async archiving on the primary. 
This certainly consumes valuable primary resources but is the best way 
to keep up-to-date.
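
For reference, if you do turn async archiving on later it is just a few options on the host running
archive_command (values here are only an example, and the spool directory must exist and be writable by the
postgres user):

  [global]
  archive-async=y
  # local queue for WAL handed off between PostgreSQL and the repository
  spool-path=/var/spool/pgbackrest

  [global:archive-push]
  # parallel WAL transfers
  process-max=4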

Regards,
-- 
-David
david@pgmasters.net


Re: Backup solution over unreliable network

From: Achilleas Mantzios
Date:
On 16/1/19 7:18 μ.μ., David Steele wrote:

> On 1/16/19 10:52 AM, Achilleas Mantzios wrote:
>> Hello David, Stephen, All and HPNY
>> On 30/11/18 6:50 μ.μ., David Steele wrote:
>>>
>>> Multiple stanzas are tricky to configure if async archiving is in 
>>> use, otherwise it is relatively straightforward.  You just need two 
>>> configuration files and each archive command will need one 
>>> explicitly configured (--config).
>>>
>>> If async archiving is enabled then each stanza will also need a 
>>> separate spool directory.  This configuration has never been tested 
>>> and I recommend against it.
>
>> Just tested finished backing up our 1.2T logical subscriber test node 
>> DB ! With a deliberate interrupt and with --resume --process-max=4 
>> and it worked just great!
>> On our production primary/physical standby cluster I want to retain 
>> our (primitive) local backup/archive functionality, which we do via :
>> archive_command = /usr/bin/rsync -a --delay-updates %p 
>> sma:/smadb/pgsql/pitr/%f
>> and instead of using a second local pgbackrest repo, just combine 
>> archive_command asis with pgbackrest to the remote repo with 
>> something like :
>> archive_command = /usr/bin/rsync -a --delay-updates %p 
>> sma:/smadb/pgsql/pitr/%f && pgbackrest --stanza=dynacom 
>> --archive-push-queue-max=50G archive-push %p
>
>> I read the code and saw that --archive-push-queue-max works even when 
>> not in async mode (default push). We are not planning for async at 
>> this early stage. Do you see any potential problem with the above?
>
> This seems reasonable since there is only one pgBackRest archive command.
Thanks!
>
> If you do eventually decide you need async then the rsync command will 
> become a major bottleneck -- pgBackRest is simply much faster than rsync.
>
>>> pgBackRest currently requires some files and all WAL to be sent from 
>>> the primary even when doing backup from standby.  We may improve 
>>> this in the future but it's not on the road map right now.
>>
>> We are planning to backup from the physical standby, but as you said 
>> the archive_command would be running from the primary.
>
> We haven't seen any issue with this configuration.  If WAL rates are 
> high then replication will likely lag whereas pgBackRest can keep up 
> with higher WAL rates using parallel async archiving on the primary. 
> This certainly consumes valuable primary resources but is the best way 
> to keep up-to-date.
My intention was just to verify that I am in line with the docs and your prior 
emails!
>
> Regards,


Re: Backup solution over unreliable network

From: bitcoin wallet
Date:


From: Achilleas Mantzios <achill(at)matrix(dot)gatewaynet(dot)com>
To: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Backup solution over unreliable network
Date: 2018-11-30 10:17:27
Message-ID: fb7e7296-c60e-c2cc-93d5-9c2451e9a2a5@matrix.gatewaynet.com
Lists: pgsql-admin
Hello, we've been running our backup solution for the last 5 months to a second site which has an unreliable network connection. We had problems with barman, since it doesn't support backup resume, also no option to disable the replication slot, in the sense, that it is better to sacrifice the backup rather than fill up the primary with WALs and bring the primary down. Another issue is now supporting entirely backing up from the secondary. With barman this is not possible, streaming (or archiving) must originate from the primary.So I want to ask two things here :
  • Backing up to a remote site over an unreliable channel is a limited use case by itself, it is useful for local PITR restores on specific tables/data, or in case the whole primary suffers a disaster.
    Is there any other benefit that would justify building a solution for it?
  • I have only read the best reviews about PgBackRest, can PgBackRest address those issues?

Thank you!

--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Backup solution over unreliable network

From: Ron
Date:

https://pgbackrest.org/configuration.html#section-backup/option-resume

"
Allow resume of failed backup.
Defines whether the resume feature is enabled. Resume can greatly reduce the amount of time required to run a backup after a previous backup of the same type has failed. It adds complexity, however, so it may be desirable to disable in environments that do not require the feature.
"


On 9/30/23 13:55, bitcoin wallet wrote:


From: Achilleas Mantzios <achill(at)matrix(dot)gatewaynet(dot)com>
To: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Backup solution over unreliable network
Date: 2018-11-30 10:17:27
Message-ID: fb7e7296-c60e-c2cc-93d5-9c2451e9a2a5@matrix.gatewaynet.com
Lists: pgsql-admin
Hello, we've been running our backup solution for the last 5 months to a second site which has an unreliable network connection. We had problems with barman, since it doesn't support backup resume, also no option to disable the replication slot, in the sense, that it is better to sacrifice the backup rather than fill up the primary with WALs and bring the primary down. Another issue is now supporting entirely backing up from the secondary. With barman this is not possible, streaming (or archiving) must originate from the primary.So I want to ask two things here :
  • Backing up to a remote site over an unreliable channel is a limited use case by itself, it is useful for local PITR restores on specific tables/data, or in case the whole primary suffers a disaster.
    Is there any other benefit that would justify building a solution for it?
  • I have only read the best reviews about PgBackRest, can PgBackRest address those issues?

Thank you!

--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Born in Arizona, moved to Babylonia.