Re: pg_dump large-file support > 16GB
| From | Tom Lane |
|---|---|
| Subject | Re: pg_dump large-file support > 16GB |
| Date | |
| Msg-id | 24124.1111072637@sss.pgh.pa.us |
| In response to | pg_dump large-file support > 16GB (Rafael Martinez Guerrero <r.m.guerrero@usit.uio.no>) |
| Responses | Re: pg_dump large-file support > 16GB<br>Re: pg_dump large-file support > 16GB |
| List | pgsql-general |
Rafael Martinez Guerrero <r.m.guerrero@usit.uio.no> writes:
> We are trying to dump a 30GB+ database using pg_dump with the --file
> option. In the beginning everything works fine: pg_dump runs and we get
> a dump file. But when this file reaches 16GB it disappears from the
> filesystem, yet pg_dump continues working without reporting an error until
> it finishes, even though the file no longer exists. (The filesystem has
> free space.)
Is that a plain text, tar, or custom dump (-Ft or -Fc)? Is the behavior
different if you just write to stdout instead of using --file?
regards, tom lane
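For reference, the two invocations Tom suggests comparing might look like the following sketch (the database name `mydb` and output filenames are hypothetical; adjust the `-F` flag to whichever format was actually used):

```shell
# Case reported above: pg_dump writes the dump itself via --file
pg_dump -Fc --file=mydb.dump mydb

# Diagnostic variant: pg_dump writes to stdout and the shell redirects,
# so any large-file handling in pg_dump's own file I/O path is bypassed
pg_dump -Fc mydb > mydb.stdout.dump
```

If the stdout variant produces a complete >16GB file while the `--file` variant does not, that points at pg_dump's internal file handling (or its large-file build options) rather than the filesystem.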