Thread: Backup issue


Backup issue

From
Marcin Giedz
Date:
Hello...

This is what I have now: PostgreSQL 8.0.1 - the database weighs about 60GB
and grows by about 2GB per week. Currently I do a backup every day,
following a simple procedure (pg_start_backup : rsync
data : pg_stop_backup : save the WALs produced during the backup). On a 1Gb
internal network it usually takes about 1h to perform this procedure.
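For reference, the procedure above might look roughly like the following
script (a sketch only; the paths, the backup host name, and the 'daily'
backup label are placeholders, not from the original setup):

```shell
# Sketch of the daily base-backup procedure (pg_start_backup : rsync : pg_stop_backup).
# All paths and host names below are assumptions.
PGDATA=/var/lib/pgsql/data
BACKUP_DEST=backuphost:/backups/base/$(date +%Y%m%d)

# Tell the server a base backup is starting (forces a checkpoint).
psql -U postgres -c "SELECT pg_start_backup('daily');"

# Copy the cluster; pg_xlog is excluded because the WALs produced
# during the backup are saved separately afterwards.
rsync -a --exclude=pg_xlog "$PGDATA/" "$BACKUP_DEST/"

# Tell the server the base backup is finished.
psql -U postgres -c "SELECT pg_stop_backup();"
```

The WAL segments written between pg_start_backup and pg_stop_backup must
be kept as well, since the rsync'd data directory alone is not consistent
without them.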

But what if my database reaches ~200GB and more (I know this is in the
future :D)? From my point of view it won't be a good idea to copy the entire
database to the backup array. I would like to hear opinions about this
case - what do you propose? Maybe some of you already do something like this?

Regards,
Marcin Giedz

Re: Backup issue

From
Jeff Frost
Date:
Marcin,

What I have done in the past is set up a bi-monthly base backup
(pg_start_backup : rsync data : pg_stop_backup) and archive the WAL files between
base backups with the archive_command option in postgresql.conf, saving 2 base
backups and removing anything older than the oldest base backup.  If you use
rsync with the --link-dest option you can potentially save much space between the
two base backups if you do not have many deletes in your DB.  The --link-dest
option assumes you are rsyncing to a filesystem which supports hardlinks.  I
also gzip the WAL files when archiving them.

> But what if my database reaches ~200GB and more (I know this is in the future :D)?
> From my point of view it won't be a good idea to copy the entire database to
> the backup array. I would like to hear opinions about this case - what do you
> propose? Maybe some of you already do something like this?

--
Jeff Frost, Owner     <jeff@frostconsultingllc.com>
Frost Consulting, LLC     http://www.frostconsultingllc.com/
Phone: 650-780-7908    FAX: 650-649-1954

Re: Backup issue

From
Scott Marlowe
Date:
On Sat, 2005-09-17 at 02:49, Marcin Giedz wrote:
> Hello...
>
> This is what I have now: PostgreSQL 8.0.1 - the database weighs about 60GB
> and grows by about 2GB per week. Currently I do a backup every day,
> following a simple procedure (pg_start_backup : rsync
> data : pg_stop_backup : save the WALs produced during the backup). On a 1Gb
> internal network it usually takes about 1h to perform this procedure.
>
> But what if my database reaches ~200GB and more (I know this is in the
> future :D)? From my point of view it won't be a good idea to copy the entire
> database to the backup array. I would like to hear opinions about this
> case - what do you propose? Maybe some of you already do something like this?

I'd look at using PITR replication, with a fresh whole backup every month
or so instead of a whole backup every day.
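Restoring from such a setup (a base backup plus the archived WALs) comes
down to restoring the base backup into the data directory and pointing
recovery at the WAL archive; in 8.0 that is done with a recovery.conf file.
A sketch, where the archive path is a placeholder and gunzip matches the
gzipped-WAL archiving scheme described earlier in the thread:

```
# recovery.conf - placed in the data directory after restoring the base backup.
# The archive path is an assumption; gunzip pairs with gzip in archive_command.
restore_command = 'gunzip -c /backups/wal/%f.gz > %p'
```

On startup the server replays the archived WALs through restore_command
until it reaches the end of the archive (or a configured recovery target).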