Re: pg_dump large-file support > 16GB
| From | Tom Lane |
|---|---|
| Subject | Re: pg_dump large-file support > 16GB |
| Date | |
| Msg-id | 6443.1111157914@sss.pgh.pa.us |
| In reply to | Re: pg_dump large-file support > 16GB (Rafael Martinez <r.m.guerrero@usit.uio.no>) |
| Responses | Re: pg_dump large-file support > 16GB, Re: pg_dump large-file support > 16GB |
| List | pgsql-general |
Rafael Martinez <r.m.guerrero@usit.uio.no> writes:
> On Thu, 2005-03-17 at 10:17 -0500, Tom Lane wrote:
>> Is that a plain text, tar, or custom dump (-Ft or -Fc)? Is the behavior
>> different if you just write to stdout instead of using --file?
> - In this example, it is a plain text dump (--format=p).
> - If I write to stdout and redirect to a file, the dump finishes and I
> get a text dump file over 16GB without problems.
In that case, you have a glibc or filesystem bug and you should be
reporting it to Red Hat. The *only* difference between writing to
stdout and writing to a --file option is that in one case we use
the preopened "stdout" FILE* and in the other case we do
fopen(filename, "w"). Your report therefore states that something
is broken about fopen'd files.
regards, tom lane