Re: large file limitation

From: Tom Lane
Subject: Re: large file limitation
Date:
Msg-id: 11359.1011405107@sss.pgh.pa.us
In reply to: Re: large file limitation  (Jan Wieck <janwieck@yahoo.com>)
Responses: Re: large file limitation  (Jan Wieck <janwieck@yahoo.com>)
           Re: large file limitation  (Andrew Sullivan <andrew@libertyrms.info>)
List: pgsql-general
Jan Wieck <janwieck@yahoo.com> writes:
>>> I suppose I need to recompile Postgres on the system now that it
>>> accepts large files.
>>
>> Yes.

>     No.  PostgreSQL is totally fine with that limit; it will just
>     segment huge tables into separate files of 1GB max each.
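
For concreteness, here is roughly what that segmenting looks like on
disk.  A minimal sketch: the database OID (16384) and table relfilenode
(18693) are hypothetical and will differ on any real installation.

    # Segments of one large table under the data directory; each file
    # is capped at 1GB, with .1, .2, ... suffixes for overflow segments.
    $ ls $PGDATA/base/16384/ | grep '^18693'
    18693
    18693.1
    18693.2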

The backend is fine with it, but "pg_dump >outfile" will choke when
it gets past 2GB of output (at least, that is true on Solaris).

I imagine "pg_dump | split" would do as a workaround, but I don't have
a Solaris box handy to verify.
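
Something along these lines, say (a sketch; the database names and the
chunk size are placeholders, with 1000m chosen to keep each piece
comfortably under the 2GB stdio limit):

    # Dump into 1000MB pieces named dump.aa, dump.ab, ...
    pg_dump mydb | split -b 1000m - dump.

    # Restore later by concatenating the pieces back into psql.
    cat dump.* | psql newdb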

I can envision building 32-bit-compatible stdio packages that don't
choke on large files unless you actually try to do ftell or fseek beyond
the 2G boundary.  Solaris' implementation, however, evidently fails
hard at the boundary.
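
For anyone who would rather rebuild with large-file support than work
around it: on platforms that implement the POSIX getconf LFS queries
(Solaris among them), something like the following should pick up the
right flags.  This is a sketch, not a tested recipe.

    # Ask the platform for its large-file compile and link flags
    # (typically -D_FILE_OFFSET_BITS=64 and friends), then configure
    # the build with them so fopen/fseek/ftell use 64-bit offsets.
    CFLAGS="$(getconf LFS_CFLAGS)" LDFLAGS="$(getconf LFS_LDFLAGS)" ./configure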

            regards, tom lane
