restore of large databases failing--any ideas?

From: s_hawkins@mindspring.com (S. Hawkins)
Subject: restore of large databases failing--any ideas?
Date:
Msg-id: a7fac81d.0404071112.2d193673@posting.google.com
Responses: Re: restore of large databases failing--any ideas?  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
Hi all,

We're using pg_dump to back up our databases.  The actual pg_dump
appears to work fine.  On smaller (< approx. 100 Meg) data sets, the
restore also works, but on larger data sets the restore process
consistently fails.

Other facts that may be of interest:
 * We're running Postgres 7.2.3 on a more-or-less stock Red Hat 7.3 platform.
 * Backup is done with "pg_dump -c -U postgres", then gzip
 * Restore is via "cat <archive_file> | gunzip | psql" (full pipeline sketched below)
 

The particular file I'm wrestling with at the moment is ~2.2 Gig
unzipped.  If you try to restore using pg_restore, the process
immediately fails with the following:
   pg_restore: [archiver] could not open input file: File too large
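
For what it's worth, the pg_restore attempt was along these lines (the
database and file names are placeholders):

    pg_restore -d dbname archive_file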

When the data file is gzip'd, you can at least get the restore process
started with the following:
    cat archive_file.gz | gunzip | psql dbname

The above command line starts OK, but eventually fails with:
    server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
    connection to server was lost
 

