Re: WAL scenario valid?

From: Sergey Konoplev
Subject: Re: WAL scenario valid?
Date:
Msg-id: CAL_0b1sfVWM_nm5iec1RdsFHOhAC=KUSm+mRuaF_xiozX-zaKw@mail.gmail.com
In reply to: WAL scenario valid?  (Keith Ouellette <Keith.Ouellette@Airgas.com>)
Responses: Re: WAL scenario valid?  (prakhar jauhari <prak840@gmail.com>)
List: pgsql-admin
On Mon, Jun 17, 2013 at 8:14 AM, Keith Ouellette
<Keith.Ouellette@airgas.com> wrote:
> I got the initial WAL replication going between DB1 and DB2 and between the
> virtual IP (currently on DB1) and DB3. So far all is well. However, if I
> simulate a failover to DB2 (promote DB2 and move the virtual IP to it), WAL
> does not resume between the virtual IP and DB3. I tried restarting
> PostgreSQL on DB2 to see if that would kick start it, but it did not. The
> only way I could get WAL between the virtual IP and DB3 is to do a manual
> sync using rsync.

[skipped]

> Is what I am trying to do possible?

It is.

In your situation, with both replicas following the master, in case of
the master's failure you need to find the most caught-up replica. To
do this, compare the WAL replay locations on the replicas and choose
the one with the highest value.

SELECT pg_last_xlog_replay_location();

Note that if you choose one that is not the most caught up, then any
other replicas that have replayed later WAL entries must be
reconfigured from scratch. Otherwise their data might be corrupted,
and you will not get any warning about it.
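To compare replay locations from a script, the LSN values returned by
pg_last_xlog_replay_location() can be converted to plain numbers. A minimal
sketch follows; the host names and the psql invocation are assumptions, and
the two values are hard-coded so the sketch is self-contained:

```shell
#!/bin/bash
# Convert an LSN like '0/5000060' (logid/offset, both hexadecimal) into a
# single number so replay positions can be compared.
lsn_to_num() {
    hi=${1%/*}; lo=${1#*/}
    echo $(( 16#$hi * 4294967296 + 16#$lo ))
}

# In real use each value would come from a replica, e.g.:
#   psql -h db2 -At -c "SELECT pg_last_xlog_replay_location();"
db2_lsn='0/5000060'
db3_lsn='0/5000000'

if [ "$(lsn_to_num "$db2_lsn")" -ge "$(lsn_to_num "$db3_lsn")" ]; then
    echo "db2 is the most caught up"
else
    echo "db3 is the most caught up"
fi
```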

In the case of a planned switchover, simply choose the new master
yourself.

Then stop all the slaves except the new master.

If you are not sure that the master and the remaining slave are ahead
of the other (stopped) slaves, use the command below to guarantee it.
The command creates a minimal WAL entry.

SELECT txid_current();

Touch the failover (trigger) file on the remaining slave to promote it to the new master.
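For reference, the failover file is whatever path the slave's recovery.conf
names in trigger_file; the path below is only an assumption:

```
# recovery.conf on the standby (path is an assumption)
trigger_file = '/tmp/pg_failover_trigger'
```

With that setting, promotion is simply: touch /tmp/pg_failover_trigger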

On the stopped slaves, delete everything from the pg_xlog directory
and copy pg_xlog/*.history there from the new master.
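That cleanup step can be sketched as follows; the scp command and all paths
are assumptions, and a temporary directory stands in for the slave's data
directory so the sketch is safe to run as-is:

```shell
#!/bin/bash
# Stand-in for the stopped slave's data directory (a real path would be
# something like /var/lib/pgsql/data).
DATADIR=$(mktemp -d)
mkdir -p "$DATADIR/pg_xlog"
touch "$DATADIR/pg_xlog/000000010000000000000005"  # a stale WAL segment

# 1. Delete everything from the stopped slave's pg_xlog:
rm -rf "$DATADIR"/pg_xlog/*

# 2. Copy the timeline history files from the new master, e.g.:
#      scp newmaster:/var/lib/pgsql/data/pg_xlog/*.history "$DATADIR/pg_xlog/"
#    (simulated locally here so the sketch runs without a remote host):
touch "$DATADIR/pg_xlog/00000002.history"

ls "$DATADIR/pg_xlog"   # only the history file remains
```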

Then change the connection string (primary_conninfo) in recovery.conf
to point to the new master and add the following instruction, which
makes the replicas follow the latest created timeline.

recovery_target_timeline = 'latest'
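Putting it together, each restarted slave's recovery.conf might look like
this (the host, port, and user values are assumptions):

```
standby_mode = 'on'
primary_conninfo = 'host=192.168.0.10 port=5432 user=replicator'  # the new master
recovery_target_timeline = 'latest'
```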

Start postgres on these slaves and that is it.

Alternatively, if your postgres version is 9.2, you can set up
cascading replication, so db2 follows db1 and db3 follows db2. In case
of a db1 failure, all you need to do is promote db2. However, remember
that if db2 fails you will have to change the connection string in
recovery.conf on db3 to point to db1, or redirect db2's virtual IP.
Yes, I probably should have started with this solution.
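In the cascading setup, db3's recovery.conf simply points at db2 instead of
the master; the host name and user below are assumptions:

```
# recovery.conf on db3, cascading from db2 (requires 9.2)
standby_mode = 'on'
primary_conninfo = 'host=db2 port=5432 user=replicator'
recovery_target_timeline = 'latest'
```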

--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

Profile: http://www.linkedin.com/in/grayhemp
Phone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979
Skype: gray-hemp
Jabber: gray.ru@gmail.com

