>
> I think soon people are going to start calling me Mr. Big...Tables...
>
> I have a big table. 40M rows.
> On disk, its size is
> 2,090,369,024 bytes, so about 2 gigs. On a 9-gig drive I can't sort this table.
> How should one decide based on table size how much room is needed?
>
> Also, this simple table consisting of only two int4 values is the exact
> same size as a table with the same number of rows consisting of a single
> int2. There seems to be
> too much overhead here. I realise there are extra things that have to be
> saved, but I am not getting the size/performance I had hoped for... I am
> starting to think this segment of the database would be better implemented
> without a dbms because it is not expected to change at all...
>
It is taking so much disk space because it is using a TAPE sorting
method: it breaks the file into tape chunks, sorts each piece, then
merges them.
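The tape method above can be sketched as an ordinary external merge
sort. This is a minimal illustration, not PostgreSQL's actual code: each
chunk is sorted in memory and written to a temporary "tape" file, and
the tapes are then merged. Note that the run files together hold a full
copy of the data, which is why the sort can need roughly as much extra
disk as the table itself.

```python
# Sketch of a tape (external merge) sort. Illustrative only; names and
# structure are assumptions, not the PostgreSQL implementation.
import heapq
import os
import tempfile

def _write_run(sorted_chunk):
    # Write one sorted chunk to a temporary "tape" file, one value per line.
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "w") as f:
        for v in sorted_chunk:
            f.write("%d\n" % v)
    return path

def external_sort(values, chunk_size):
    """Sort an iterable of integers using temporary run files on disk."""
    runs = []
    chunk = []
    for v in values:
        chunk.append(v)
        if len(chunk) >= chunk_size:          # chunk no longer fits "in memory"
            runs.append(_write_run(sorted(chunk)))
            chunk = []
    if chunk:
        runs.append(_write_run(sorted(chunk)))

    # Merge the sorted tapes back into one ordered stream, then clean up.
    files = [open(path) for path in runs]
    try:
        merged = [int(line) for line in heapq.merge(*files, key=int)]
    finally:
        for f in files:
            f.close()
        for path in runs:
            os.unlink(path)
    return merged
```

A larger chunk size (more sort memory) means fewer, bigger runs and
fewer merge passes, which is exactly why raising the sort-memory setting
below helps.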
Can you try increasing your postgres -S parameter (sort memory, in
kilobytes) to some huge amount like 32MB and see if that helps? It should.
i.e.

postmaster -i -B 400 $DEBUG -o '-F -S 32768' "$@" >server.log 2>&1
--
Bruce Momjian | 830 Blythe Avenue
maillist@candle.pha.pa.us | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup. | (610) 853-3000(h)