Re: WAL and master multi-slave replication

From: Scot Kreienkamp
Subject: Re: WAL and master multi-slave replication
Date:
Msg-id: 37752EAC00ED92488874A27A4554C2F303330A20@lzbs6301.na.lzb.hq
In reply to: Re: WAL and master multi-slave replication  (Alvaro Herrera <alvherre@commandprompt.com>)
List: pgsql-general

Thanks,

Scot Kreienkamp
La-Z-Boy Inc.
skreien@la-z-boy.com
734-242-1444 ext 6379

-----Original Message-----
From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-owner@postgresql.org] On Behalf Of Alvaro Herrera
Sent: Wednesday, June 24, 2009 1:51 PM
To: Eduardo Morras
Cc: Scott Marlowe; pgsql-general@postgresql.org
Subject: Re: [GENERAL] WAL and master multi-slave replication

Eduardo Morras wrote:
> At 19:25 24/06/2009, you wrote:
>> On Wed, Jun 24, 2009 at 11:22 AM, Eduardo Morras <emorras@s21sec.com> wrote:
>> > Yes, there will be 3 masters collecting data (doing updates, inserts and
>> > deletes) for now and 5 slaves where we will do the searches. The slaves
>> > must have all the data collected by the 3 masters, and the system must
>> > be easily upgradable, adding new masters and new slaves.
>>
>> You know you can't push WAL files from > 1 server into a slave, right?
>
> No, I didn't know that.

I guess you don't know either that you can't query a slave while it is
in recovery (so it's only a "warm" standby, not hot).  And if you bring
it up, you can't continue applying more segments afterwards.

What you can do is grab a filesystem snapshot before bringing it online,
and then restore that snapshot when you want to apply more segments to
bring it up to date (so from Postgres' point of view it seems like it
was never brought up in the first place).
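A minimal sketch of that snapshot-and-restore cycle, assuming the standby's data directory can simply be copied while the server is stopped (the paths, and the use of a plain directory copy rather than an LVM or filesystem-level snapshot, are illustrative assumptions, not details from this thread):

```python
import shutil
import subprocess

DATA = "/var/lib/pgsql/data"           # hypothetical standby data directory
SNAP = "/var/lib/pgsql/data.snapshot"  # hypothetical place to park the frozen copy

def run(cmd):
    subprocess.run(cmd, shell=True, check=True)

def snapshot_then_bring_up():
    """Save the still-recovering state, then let this copy finish recovery for queries."""
    run(f"pg_ctl -D {DATA} -m fast stop")   # stop while still in recovery
    shutil.copytree(DATA, SNAP)             # the "filesystem snapshot" (could be LVM instead)
    # ...end recovery on this copy (e.g. make its restore_command fail or create
    # the pg_standby trigger file), start it, and run read-only queries against it...

def restore_and_catch_up():
    """Throw away the queried copy and resume applying archived WAL segments."""
    run(f"pg_ctl -D {DATA} -m fast stop")
    shutil.rmtree(DATA)
    shutil.move(SNAP, DATA)                 # back to the pre-promotion state
    run(f"pg_ctl -D {DATA} start")          # recovery carries on where it left off
```

The point is that Postgres only ever finishes recovery on a copy that is later thrown away, so the preserved data directory can keep accepting segments indefinitely.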


That is what I do.  I actually have two separate copies of Postgres running at any given time on one of my mirrors.
The first is constantly running recovery.  The second is an LVM snapshot mounted on a different directory that listens
on the network IP address.  Every hour a script shuts down both copies of Postgres, re-creates and remounts the
snapshot, alters the postgresql.conf listen address, brings the LVM-snapshot Postgres out of recovery, and then starts
both copies of Postgres again.  The whole process takes about 60 seconds, with a few sleep statements to smooth things
out.  It guarantees my PITR mirror keeps running and allows the mirror to be queried.  That's the best solution I
could figure out to fit my requirements.
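A rough sketch of what such an hourly refresh could look like (the directories, LVM names, listen address, and the particular trick used to end recovery on the snapshot copy are all assumptions for illustration; the poster's actual script is not shown in the thread):

```python
#!/usr/bin/env python
"""Hourly refresh of a queryable LVM-snapshot copy of a warm standby (illustrative sketch)."""
import subprocess
import time

STANDBY_DATA = "/var/lib/pgsql/standby"   # hypothetical: the copy that stays in recovery
SNAPSHOT_DATA = "/mnt/pgdata_snap"        # hypothetical: mounted LVM snapshot, serves queries

def run(cmd, check=True):
    subprocess.run(cmd, shell=True, check=check)

def main():
    # 1. Stop both copies of Postgres.
    run(f"pg_ctl -D {SNAPSHOT_DATA} -m fast stop", check=False)  # may not be running yet
    run(f"pg_ctl -D {STANDBY_DATA} -m fast stop")

    # 2. Drop last hour's snapshot and cut/mount a fresh one.
    run(f"umount {SNAPSHOT_DATA}", check=False)
    run("lvremove -f /dev/vg0/pgdata_snap", check=False)
    run("lvcreate -s -L 10G -n pgdata_snap /dev/vg0/pgdata")
    run(f"mount /dev/vg0/pgdata_snap {SNAPSHOT_DATA}")

    # 3. Give the snapshot copy its own listen address in postgresql.conf.
    run(f"sed -i \"s/^#*listen_addresses.*/listen_addresses = '10.0.0.5'/\" "
        f"{SNAPSHOT_DATA}/postgresql.conf")

    # 4. Bring the snapshot copy out of recovery; one way (an assumption, not from the
    #    post) is to make its restore_command fail so archive recovery ends and the
    #    server comes up read-write.
    run(f"sed -i \"s|^restore_command.*|restore_command = 'false'|\" "
        f"{SNAPSHOT_DATA}/recovery.conf")

    # 5. Start both again: the standby keeps replaying WAL, the snapshot serves queries.
    run(f"pg_ctl -D {STANDBY_DATA} start")
    time.sleep(5)  # a short pause to smooth things out, as the post describes
    run(f"pg_ctl -D {SNAPSHOT_DATA} start")

if __name__ == "__main__":
    main()
```

Because the snapshot is copy-on-write and taken while both servers are cleanly shut down, the underlying data volume is untouched and the standby resumes recovery exactly where it left off.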

BTW, PITRtools is very nice.  I had it scripted myself on 8.2; when 8.3 came out I switched to PITRtools so it would
delete the WAL logs I no longer needed.  Very nice, and much easier than my old scripts.
