incremental backup of postgres database?

From: Palle Girgensohn
Subject: incremental backup of postgres database?
Date:
Msg-id: 83540000.1044632843@rambutan.pingpong.net
List: pgsql-admin
Hi!

What would be the best suggestion for incremental backup of a rather large
database, where the bulk of the data volume consists of large objects? Since
the backup will be transmitted over a 2 Mbit/s internet line, we need to
minimize the data flow for each nightly backup. The compressed database
dump file, when dumped with pg_dump -F c -b, is roughly 2.5 GB, whereas a
dump without large objects is only roughly 2% of that size. I can live with
having to transfer the BLOB-less dump every night, but not several
gigabytes of data...
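For reference, the two dump variants described above might look something like this (a sketch; the database name mydb and the output paths are hypothetical):

```shell
# Full custom-format dump including large objects (-b): ~2.5 GB compressed
pg_dump -F c -b -f /backup/mydb-full.dump mydb

# The same dump without large objects: only about 2% of that size,
# small enough to transfer nightly over the 2 Mbit/s line
pg_dump -F c -f /backup/mydb-noblobs.dump mydb
```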

So, I will either need to find a way to get the latest data (I have
timestamps for all LOBs) and somehow get it into a file in a restorable
format... One simple way would be to select all new blobs into a temp table
and copy that table to a backup file,
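The timestamp-based idea above could be sketched roughly as follows (an assumption-laden sketch: the documents table, its columns, and the spool paths are hypothetical; lo_export runs server-side, so the paths refer to the database host):

```sql
-- Hypothetical schema: documents(id integer, loid oid, modified timestamp)
-- Collect only the large objects changed since the last nightly run:
CREATE TEMP TABLE new_lobs AS
    SELECT id, loid
    FROM documents
    WHERE modified > '2003-02-06 00:00';  -- timestamp of the previous backup

-- Export each new blob server-side into a spool directory:
SELECT lo_export(loid, '/backup/spool/' || id::text || '.lob')
FROM new_lobs;

-- Keep the id <-> file mapping so the blobs can be matched up on restore:
COPY new_lobs TO '/backup/spool/new_lobs.map';
```

The spool directory could then be transferred and the blobs re-imported with lo_import on the backup site, so only the blobs newer than the cutoff cross the 2 Mbit/s line.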

or

replicate the database in real time to the backup site using one of the
replication projects? How robust are the replication systems today? What
will happen if the 2 Mbit/s line fails temporarily?

Perhaps there are other ideas for incremental backup of postgres databases?
Your input would be much appreciated.

Thanks
Palle

