Re: Dump large DB and restore it after all.

From Condor
Subject Re: Dump large DB and restore it after all.
Date
Msg-id ae75615f39f1f8f78cfefef707ec48ea@stz-bg.com
In response to Re: Dump large DB and restore it after all.  (Craig Ringer <craig@postnewspapers.com.au>)
Responses Re: Dump large DB and restore it after all.
Re: Dump large DB and restore it after all.
List pgsql-general
On Tue, 05 Jul 2011 18:08:21 +0800, Craig Ringer wrote:
> On 5/07/2011 5:00 PM, Condor wrote:
>> Hello ppl,
>> can I ask how to dump large DB ?
>
> Same as a smaller database: using pg_dump . Why are you trying to
> split your dumps into 1GB files? What does that gain you?
>
> Are you using some kind of old file system and operating system that
> cannot handle files bigger than 2GB? If so, I'd be pretty worried
> about running a database server on it.

Well, I ran pg_dump on an ext3 filesystem with PostgreSQL 8.x and 9, and
the SQL file was truncated.
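
For the archive: one way to work around file-size limits is to compress the
dump on the fly and split it into fixed-size pieces instead of writing one
huge file. A minimal sketch, assuming a database named mydb and 1GB chunks
(both the name and the chunk size are placeholders to adjust):

  # dump, compress, and split into 1GB pieces: mydb.sql.gz.aa, .ab, ...
  pg_dump mydb | gzip | split -b 1000m - mydb.sql.gz.

  # restore: concatenate the pieces in order, decompress, and feed to psql
  cat mydb.sql.gz.* | gunzip | psql mydb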

>
> As for gzip: gzip is almost perfectly safe. The only downside with
> gzip is that a corrupted block in the file (due to a hard
> disk/dvd/memory/tape error or whatever) makes the rest of the file,
> after the corrupted block, unreadable. Since you shouldn't be storing
> your backups on anything that might get corrupted blocks, that should
> not be a problem. If you are worried about that, you're better off
> still using gzip and using an ECC coding system like par2 to allow
> recovery from bad blocks. The gzipped dump plus the par2 file will be
> smaller than the uncompressed dump, and give you much better
> protection against errors than an uncompressed dump will.
>
> To learn more about par2, go here:
>
>   http://parchive.sourceforge.net/
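
A quick sketch of what that gzip + par2 combination looks like, assuming the
par2 command-line tool (par2cmdline) and a compressed dump named mydb.sql.gz
(the file name and the 10% redundancy level are only examples):

  # create recovery data with ~10% redundancy alongside the dump
  par2 create -r10 mydb.sql.gz.par2 mydb.sql.gz

  # later: check the dump, and repair it if any blocks were damaged
  par2 verify mydb.sql.gz.par2
  par2 repair mydb.sql.gz.par2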


Thank you for info.

> --
> Craig Ringer
>

--
Regards,
Condor
