On Wed, 3 Feb 1999, Tom Lane wrote:
> Jim Carroll <jim@carroll.com> writes:
> >> It is currently unclear as to what will happen when your table reaches 2Gb
> >> of storage on most file systems. I think that >2Gb table handling got
> >> broken somehow.
>
> > I know this is probably a "loaded" question, but do have any idea what
> > might be the cause of this limitation ?
>
> Postgres does have logic for coping with tables > 2Gb by splitting them
> into multiple Unix files. Peter Mount recently reported that this
> feature appears to be broken in the current sources (cf hackers mail
> list archive for 25/Jan/99). I don't think anyone has followed up on
> the issue yet. (I dunno about the other developers, but I don't have a
> few Gb of free space to spare so I can't test it...) You could make a
> useful contribution by either determining that the feature does work, or
> fixing it if it's busted. Probably wouldn't be a very complex fix, but
> I've never looked at that part of the code.
I tested it as I had a few free Gb, and although it split the file at
2Gb, it wouldn't extend any further.
I started browsing the source the other day, and at first glance it looks
ok. I have a feeling it's something simple, and I'm planning to try it
again this weekend.
The problem I have is that it takes 4 hours for a table to reach 2Gb on my
system, so it's a slow process :-(
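For what it's worth, the arithmetic the multi-file feature has to get right can be sketched roughly like this. The constants and the `.1`, `.2` suffix naming below are assumptions based on how the splitting is described above, not taken from the actual sources:

```python
# Sketch of how a table >2Gb maps onto multiple Unix files, assuming
# 8kB blocks and segment files capped just under the 2Gb filesystem
# limit. BLOCK_SIZE and RELSEG_SIZE here are illustrative values.

BLOCK_SIZE = 8192                          # bytes per disk block (assumed)
RELSEG_SIZE = (2**31 - 1) // BLOCK_SIZE    # blocks per segment file

def segment_for_block(table, block_no):
    """Return (filename, byte offset) for a given block number.

    Blocks 0 .. RELSEG_SIZE-1 live in "table", the next RELSEG_SIZE
    blocks in "table.1", and so on.
    """
    seg = block_no // RELSEG_SIZE
    offset = (block_no % RELSEG_SIZE) * BLOCK_SIZE
    name = table if seg == 0 else "%s.%d" % (table, seg)
    return name, offset

# The first block past the 2Gb boundary should land at the start of
# the second segment file:
print(segment_for_block("mytable", RELSEG_SIZE))   # ('mytable.1', 0)
```

If the split happens but the table then refuses to grow, the symptom Peter describes, the suspect code is whatever extends writes into the second file rather than the filename mapping itself.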
Peter
--
Peter T Mount peter@retep.org.uk
Main Homepage: http://www.retep.org.uk
PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres
Java PDF Generator: http://www.retep.org.uk/pdf