dealing with file size when archiving databases

From Andrew L. Gould
Subject dealing with file size when archiving databases
Date
Msg-id 200506202128.51463.algould@datawok.com
Replies Re: dealing with file size when archiving databases  (Tom Lane <tgl@sss.pgh.pa.us>)
Re: dealing with file size when archiving databases  (Alvaro Herrera <alvherre@surnet.cl>)
Re: dealing with file size when archiving databases  (Tino Wildenhain <tino@wildenhain.de>)
Re: dealing with file size when archiving databases  (Vivek Khera <vivek@khera.org>)
List pgsql-general
I've been backing up my databases by piping pg_dump into gzip and
burning the resulting files to a DVD-R.  Unfortunately, FreeBSD has
problems dealing with very large files (>1GB?) on DVD media.  One of my
compressed database backups is greater than 1GB, and the result of a
gzipped pg_dumpall is approximately 3.5GB.  The processes for creating
the iso image and burning the image to DVD-R finish without any
problems; but the resulting file is unreadable/unusable.
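
For context, the current backup boils down to something like the following
(the database name and output path below are just examples, not my actual
script):

# Current approach, roughly: pipe pg_dump into gzip and write one
# compressed file per database (names and paths are illustrative).
import subprocess

def dump_database(dbname, outfile):
    with open(outfile, 'wb') as f:
        dump = subprocess.Popen(['pg_dump', dbname], stdout=subprocess.PIPE)
        gz = subprocess.Popen(['gzip', '-c'], stdin=dump.stdout, stdout=f)
        dump.stdout.close()   # so gzip sees EOF when pg_dump finishes
        gz.wait()

dump_database('mydb', '/backups/mydb.sql.gz')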

My proposed solution is to modify my python script to:

1. use pg_dump to dump each database's tables individually, including
both the database and table name in the file name;
2. use 'pg_dumpall -g' to dump the global information; and
3. burn the backup directories, files and a recovery script to DVD-R.

The script will pipe pg_dump into gzip to compress the files.
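
Roughly, the modified script would do something like this; the psql query,
file naming and directory layout shown here are only illustrative, not the
actual script:

# Sketch of the proposed per-table dumps plus 'pg_dumpall -g' for the
# global objects.  Query, paths and helper names are assumptions.
import os
import subprocess

BACKUP_DIR = '/backups'

def list_tables(dbname):
    # Ask psql for the user tables in the public schema.
    out = subprocess.check_output(
        ['psql', '-At', '-c',
         "SELECT tablename FROM pg_tables WHERE schemaname = 'public'",
         dbname])
    return out.decode().split()

def gzip_command(cmd, outfile):
    # Run a dump command and gzip its output to outfile.
    with open(outfile, 'wb') as f:
        dump = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        gz = subprocess.Popen(['gzip', '-c'], stdin=dump.stdout, stdout=f)
        dump.stdout.close()
        gz.wait()

def backup_database(dbname):
    # One file per table: dbname.table.sql.gz
    for table in list_tables(dbname):
        outfile = os.path.join(BACKUP_DIR, '%s.%s.sql.gz' % (dbname, table))
        gzip_command(['pg_dump', '-t', table, dbname], outfile)

# Global information (roles, etc.) that pg_dump does not include.
gzip_command(['pg_dumpall', '-g'],
             os.path.join(BACKUP_DIR, 'globals.sql.gz'))

The recovery script would then restore the globals file first and the
per-table dumps afterwards.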

My questions are:

1. Will 'pg_dumpall -g' dump everything not dumped by pg_dump?  Will I
be missing anything?
2. Does anyone foresee any problems with the solution above?

Thanks,

Andrew Gould
