Discussion: Hot Standby vs slony
Good afternoon,
We currently run Postgres 9.4 with the following setup:
                     |--> slony (reporting, high availability)
    production ----->|
                     |--> hot standby (DR)
We would like to replace slony with another instance of hot standby as follows:
                     |--> hot standby1 (reporting, HA)
    production ----->|
                     |--> hot standby2 (DR)
Is this possible? I see in the documentation that it is possible for warm standby, but I don't
see a confirmation in the section on hot standby.
Thank you,
Mark Steben
Database Administrator
@utoRevenue | Autobase
CRM division of Dominion Dealer Solutions
95D Ashley Ave.
West Springfield, MA 01089
t: 413.327-3045
f: 413.383-9567
www.fb.com/DominionDealerSolutions
www.twitter.com/DominionDealer
www.drivedominion.com
On Thu, Feb 8, 2018 at 1:09 PM, Mark Steben <mark.steben@drivedominion.com> wrote:
Yes, you can run multiple hot standbys from the primary, cascade hot standbys from each other, or use combinations of both.
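As a sketch, this is roughly what the configuration looks like on 9.4, where each standby carries a recovery.conf. Hostnames and the replication user are placeholders; adjust to your environment:

```ini
# recovery.conf on hot standby1 and hot standby2 (PostgreSQL 9.4)
# Both stream directly from the primary; a cascaded standby would
# simply point primary_conninfo at the upstream standby instead.
standby_mode = 'on'
primary_conninfo = 'host=production port=5432 user=replicator application_name=standby1'
recovery_target_timeline = 'latest'

# postgresql.conf on the primary (and on any upstream standby that
# feeds a cascaded standby):
#   wal_level = hot_standby
#   max_wal_senders = 5        # at least one per downstream standby
# postgresql.conf on each standby:
#   hot_standby = on           # allow read-only queries (reporting)
```

Cascading replication has been supported since 9.2, so either fan-out (both standbys off the primary) or a chain (standby2 off standby1) works.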
I can say that with confidence, as one of the common configurations I run (across roughly 1500 servers) consists of a primary PG cluster with a hot standby using streaming (async) replication within the same data centre, a remote "primary" hot standby fed by WAL shipping, and a remote hot standby streaming off that. The remote primary runs with delayed WAL application, varying between 1 and 4 hours depending on the class of replica set. This configuration covers basic DR and HA, and in case of user error we can fail over (promote the remote primary replica before any destructive changes are applied to the remote hot standby). One caveat is that a sudden interruption between DCs followed by a failover could result in some data loss, depending on the archive_timeout/WAL switch rate etc., but that's a business RPO we've agreed upon with clients.
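The delayed-apply part of the setup above can be sketched with `recovery_min_apply_delay`, which was added in 9.4. Paths and the delay value below are illustrative placeholders:

```ini
# recovery.conf on the delayed remote "primary" standby (PostgreSQL 9.4)
standby_mode = 'on'
# Fed by WAL shipping rather than streaming:
restore_command = 'cp /wal_archive/%f %p'
# Hold back WAL application, giving a window to promote before a
# user-destructive change reaches this standby:
recovery_min_apply_delay = '4h'
```

Note the delay applies to WAL replay, not receipt, so the WAL is already on the standby and promotion simply replays it forward.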