Re: dealing with file size when archiving databases

From	Vivek Khera
Subject	Re: dealing with file size when archiving databases
Date
Msg-id	EBC16E9E-8140-475B-8E50-2E928EDD6CA4@khera.org
In reply to	dealing with file size when archiving databases  ("Andrew L. Gould" <algould@datawok.com>)
List	pgsql-general
On Jun 20, 2005, at 10:28 PM, Andrew L. Gould wrote:

> compressed database backups is greater than 1GB; and the results of a
> gzipped pg_dumpall is approximately 3.5GB.  The processes for creating
> the iso image and burning the image to DVD-R finish without any
> problems; but the resulting file is unreadable/unusable.

I ran into this as well.  Apparently FreeBSD will not read a large
file on an ISO 9660 file system, even though on a standard UFS or
UFS2 filesystem it will read files larger than you can make :-).

What I used to do was "split -b 1024m my.dump my.dump-split-" to
create multiple files and burn those to the DVD.  To restore, you
"cat my.dump-split-?? | pg_restore" with appropriate options to
pg_restore.
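A sketch of that split/reassemble round trip (file names and sizes are
illustrative; in practice my.dump would come from pg_dump or pg_dumpall,
and the pieces would be burned to DVD between the two steps):

```shell
# Stand-in for a large dump file (10 MiB of random data here):
dd if=/dev/urandom of=my.dump bs=1024 count=10240

# Split into pieces small enough for the target medium
# (1024m per piece for a DVD; 1m here so the example runs quickly).
# Produces my.dump-split-aa, my.dump-split-ab, ...
split -b 1m my.dump my.dump-split-

# To restore, reassemble the pieces in shell-glob (alphabetical)
# order.  With a real dump you would pipe this into pg_restore;
# here we write to a file and verify the round trip with cmp:
cat my.dump-split-?? > my.dump.reassembled
cmp my.dump my.dump.reassembled && echo "round trip OK"
```

The `??` glob works because split names its outputs with a two-letter
suffix that sorts in the same order the pieces were written.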

My ultimate fix was to start burning and reading the DVDs on my
MacOS desktop instead, which can read/write these large files just
fine :-)


Vivek Khera, Ph.D.
+1-301-869-4449 x806


