Jim Carroll <jim@carroll.com> writes:
>> It is currently unclear as to what will happen when your table reaches 2G
>> of storage on most file systems. I think that >2G table handling got
>> broken somehow.
> I know this is probably a "loaded" question, but do you have any idea
> what might be the cause of this limitation?
Postgres does have logic for coping with tables > 2GB by splitting them
into multiple Unix files. Peter Mount recently reported that this
feature appears to be broken in the current sources (cf hackers mail
list archive for 25/Jan/99). I don't think anyone has followed up on
the issue yet. (I dunno about the other developers, but I don't have a
few GB of free space to spare, so I can't test it...) You could make a
useful contribution by either determining that the feature does work, or
fixing it if it's busted. Probably wouldn't be a very complex fix, but
I've never looked at that part of the code.
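The segment arithmetic is simple enough to sketch. This is only an
illustration, not the actual backend code: the 1GB segment size and the
bare-name/.1/.2 file-naming scheme are assumptions based on how the
splitting is usually described, and the function name is made up.

```python
# Assumed segment size (the real value is a compile-time constant
# in the backend; 1GB is the conventional figure).
SEG_BYTES = 1 << 30

def segment_of(byte_offset, table="mytable", seg_bytes=SEG_BYTES):
    """Map a logical byte offset within a table to (segment file,
    offset within that file).  Segment 0 keeps the bare table name;
    later segments get a numeric suffix: mytable.1, mytable.2, ...
    All names here are hypothetical."""
    n, off = divmod(byte_offset, seg_bytes)
    name = table if n == 0 else "%s.%d" % (table, n)
    return name, off
```

So a read at, say, byte 3GB+5 of the table would land 5 bytes into the
file "mytable.3" under these assumptions.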
If your total database will exceed the space available on a single
filesystem on your platform, you will have to play some games with
symbolic links in order to spread the table files across multiple
filesystems. I don't know of any gotchas in doing that, but it's
kind of a pain for the DB admin to have to do it by hand.
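The symlink game amounts to moving a segment file onto another
filesystem and leaving a link behind at the original path. A minimal
sketch of that step (the helper and all paths are hypothetical, and
you'd want the postmaster shut down while doing it):

```python
import os
import shutil

def relocate_segment(seg_path, new_dir):
    """Move one table segment file to a directory on another
    filesystem and leave a symlink at the original location, so the
    backend still finds it under the old name.  Purely illustrative."""
    dest = os.path.join(new_dir, os.path.basename(seg_path))
    shutil.move(seg_path, dest)   # copies across filesystems if needed
    os.symlink(dest, seg_path)    # old path now points at the new home
    return dest
```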
regards, tom lane