Re: Strategy for moving a large DB to another machine with least possible down-time

From Andy Colson
Subject Re: Strategy for moving a large DB to another machine with least possible down-time
Date
Msg-id 541F1C11.3040202@squeakycode.net
In response to Strategy for moving a large DB to another machine with least possible down-time  (Andreas Joseph Krogh <andreas@visena.com>)
List pgsql-general
On 09/21/2014 06:36 AM, Andreas Joseph Krogh wrote:
> Hi all.
> PG-version: 9.3.5
> I have a DB large enough for it to be impractical to pg_dump/restore it (it would require too much down-time for the
> customer). Note that I'm not able to move the whole cluster, only *one* DB in that cluster.
> What is the best way to perform such a move? Can I use PITR, rsync + WAL-replay magic, what else?
> Can Barman help with this, maybe?
> Thanks.
> --
> *Andreas Joseph Krogh*
> CTO / Partner - Visena AS
> Mobile: +47 909 56 963
> andreas@visena.com <mailto:andreas@visena.com>
> www.visena.com <https://www.visena.com>
> <https://www.visena.com>

I had a less big-ish table I wanted to move, but not everything else.  I had a timestamp on the table I could use as
"close enough to unique".  I wrote a Perl script that would dump 100K records at a time (ordered by the timestamp).  It
would dump records and then disconnect and sleep for 30 seconds or so, which kept usage low.
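The original was a Perl script running against live Postgres; as a minimal sketch of the same batching idea, the snippet below simulates it in Python with in-memory lists standing in for the old and new databases (the function name, tuple layout, and batch size are all illustrative assumptions, not the original code):

```python
import time

def copy_in_batches(source_rows, batch_size, pause=0.0):
    """Copy rows in timestamp order, one fixed-size batch at a time,
    pausing between batches so the source stays lightly loaded.
    source_rows: iterable of (timestamp, payload) tuples."""
    rows = sorted(source_rows)            # stands in for ORDER BY timestamp
    dest, last_ts = [], None
    while True:
        # keyset-style paging: next batch is everything after the last timestamp seen
        pending = rows if last_ts is None else [r for r in rows if r[0] > last_ts]
        batch = pending[:batch_size]
        if not batch:
            break
        dest.extend(batch)                # stands in for inserting into the new DB
        last_ts = batch[-1][0]
        time.sleep(pause)                 # the disconnect-and-sleep between batches
    return dest

rows = [(t, "row%d" % t) for t in range(10)]
print(len(copy_in_batches(rows, batch_size=4)))  # 10
```

Note that paging on `r[0] > last_ts` relies on the timestamp being "close enough to unique": rows sharing the final timestamp of a batch would be skipped, which is the trade-off Andy accepts above.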

It took a while, but once it caught up, I changed the script to get the max(timestamp) from olddb and newdb and only
copy the missing ones.  I could keep them in sync this way until I was ready to switch over.
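The catch-up phase can be sketched the same way: take max(timestamp) on the destination and copy only rows newer than that cutoff (again a hypothetical Python simulation, not the original Perl):

```python
def sync_missing(old_rows, new_rows):
    """Append to new_rows only those rows from old_rows whose timestamp
    is newer than max(timestamp) already present in new_rows."""
    cutoff = max((ts for ts, _ in new_rows), default=None)
    missing = [r for r in sorted(old_rows) if cutoff is None or r[0] > cutoff]
    new_rows.extend(missing)              # stands in for inserting into the new DB
    return missing

old = [(t, "row%d" % t) for t in range(10)]
new = old[:7]                             # new DB is three rows behind
print(len(sync_missing(old, new)))        # 3
```

Run repeatedly, this keeps the two copies converging until the final switch-over, when a last sync during a brief write freeze closes the gap.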

-Andy

