Re: Large file support needed? Trying to identify root of
From: Scott Marlowe
Subject: Re: Large file support needed? Trying to identify root of
Date:
Msg-id: 1090267674.709.5.camel@localhost.localdomain
In reply to: Large file support needed? Trying to identify root of error. (Kris Kiger <kris@musicrebellion.com>)
List: pgsql-admin
On Mon, 2004-07-19 at 13:28, Kris Kiger wrote:
> I've got a database that is a single table with 5 integers, a timestamp
> with time zone, and a boolean. The table is 170 million rows in length.
> The contents of the tar'd dump file it produced using:
>     pg_dump -U postgres -Ft test > test_backup.tar
> is: 8.dat (approximately 8GB), a toc, and restore.sql.
>
> No errors are reported on dump, however, when a restore is attempted I get:
>
> ERROR: unexpected message type 0x58 during COPY from stdin
> CONTEXT: COPY test_table, line 86077128: ""
> ERROR: could not send data to client: Broken pipe
> CONTEXT: COPY test_table, line 86077128: ""
>
> I am doing the dump & restore on the same machine.
>
> Any ideas? If the file is too large, is there anyway postgres could
> break it up into smaller chunks for the tar when backing up? Thanks for
> the help!

How, exactly, are you restoring? Doing things like:

    cat file | pg_restore ...

can cause problems because cat is often limited to 2 gigs on many OSes.
Just use a redirect:

    psql dbname < file
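A minimal sketch of both restore paths, assuming the dump file and database
names from the quoted message (test_backup.tar, database test); the plain-text
dump filename test_backup.sql is hypothetical, used only to illustrate the
redirect form:

    # tar-format archive: let pg_restore open the file itself, so no
    # cat/pipe (and its 2 GB limit) is involved
    pg_restore -U postgres -d test test_backup.tar

    # plain-text dump: a shell redirect rather than piping through cat
    psql test < test_backup.sql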