Jan Wieck <janwieck@yahoo.com> writes:
>>> I suppose I need to recompile Postgres on the system now that it
>>> accepts large files.
>>
>> Yes.
> No. PostgreSQL is totally fine with that limit, it will just
> segment huge tables into separate files of 1G max each.
The backend is fine with it, but "pg_dump >outfile" will choke when
it gets past 2GB of output (at least, that is true on Solaris).
I imagine "pg_dump | split" would do as a workaround, but don't have
a Solaris box handy to verify.
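For the archives, the workaround can be sketched roughly like this; the
database name, chunk size, and file prefix are my own illustration, not
anything from the original report:

```shell
# Hypothetical sketch of the "pg_dump | split" workaround.
# Dump, splitting the output into ~1GB pieces so no single file
# crosses the 32-bit 2GB stdio limit:
#   pg_dump mydb | split -b 1000m - mydb.dump.
# Restore by concatenating the pieces back in order:
#   cat mydb.dump.* | psql mydb

# Self-contained demonstration of the same split/reassemble cycle,
# using dummy data in place of a real dump:
printf 'line %d\n' 1 2 3 4 5 > dump.sql
split -b 16 dump.sql dump.sql.        # 16-byte chunks: .aa, .ab, ...
cat dump.sql.* > restored.sql         # the shell glob sorts aa, ab, ...
cmp dump.sql restored.sql && echo "round trip ok"
```

This works because split names its output files with sorted suffixes, so
the shell glob reassembles them in the original order.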
I can envision building 32-bit-compatible stdio packages that don't
choke on large files unless you actually try to ftell or fseek beyond
the 2GB boundary. Solaris' implementation, however, evidently fails
hard at that boundary.
regards, tom lane