Re: pg_dump large-file support > 16GB
| From | Aly Dharshi |
|---|---|
| Subject | Re: pg_dump large-file support > 16GB |
| Date | |
| Msg-id | 4239C14E.90101@telus.net |
| In reply to | Re: pg_dump large-file support > 16GB (Tom Lane <tgl@sss.pgh.pa.us>) |
| Replies | Re: pg_dump large-file support > 16GB |
| List | pgsql-general |
Would it help to use a different filesystem, like SGI's XFS? Would it even be
possible to implement that at your site at this stage?
Tom Lane wrote:
> Rafael Martinez Guerrero <r.m.guerrero@usit.uio.no> writes:
>
>>We are trying to dump a 30GB+ database using pg_dump with the --file
>>option. In the beginning everything works fine: pg_dump runs and we get
>>a dump file. But when this file reaches 16GB it disappears from the
>>filesystem, and pg_dump keeps working without reporting an error until it
>>finishes (even though the file no longer exists). The filesystem has free
>>space.
>
>
> Is that a plain text, tar, or custom dump (-Ft or -Fc)? Is the behavior
> different if you just write to stdout instead of using --file?
>
> regards, tom lane
>
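Along the lines of Tom's suggestion to write to stdout instead of --file, a common way to sidestep a per-file size limit is to stream the dump through split so no single output file crosses the cutoff. This is a sketch, not from the original thread: the database name "mydb" and the 1GB chunk size are placeholders, and it assumes the 16GB cutoff comes from large-file handling in the filesystem or in pg_dump's own output path.

```shell
# Stream a custom-format dump to stdout and split it into 1GB chunks,
# so no individual file approaches the (assumed) 16GB limit.
# "mydb" and the chunk size are placeholders.
pg_dump -Fc mydb | split -b 1024m - mydb.dump.part.

# To restore, concatenate the chunks back into a single stream:
# cat mydb.dump.part.* | pg_restore -d mydb
```

Because the dump never lands in one large file, this also works on filesystems that genuinely cap file size, at the cost of having to reassemble the pieces at restore time.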
--
Aly Dharshi
aly.dharshi@telus.net
"A good speech is like a good dress
that's short enough to be interesting
and long enough to cover the subject"