> Basically: move the first 100 rows to the end of the table file, then take
> row 100 and write it to position 0, row 101 to position 1, and so on. That
> way you use at most (tuple size * 100) bytes of extra disk space, versus 2x
> the table size. Either method is going to lock the file for a period of
> time, but this one is much friendlier as far as disk space is concerned.
> *Plus*, if RAM is available for this, the backend could use up to -S blocks
> of RAM to do it off disk? If I set -S to 64 meg and the table is 24 meg in
> size, it could do it all in memory?
Yes, I liked that too.
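The shuffle quoted above can be sketched as a chunked left-rotation of a file of fixed-width records. This is a minimal Python illustration, not the backend's C: the fixed tuple width, the function name, and the chunk size of 100 are all assumptions for the sketch. The point it demonstrates is the space bound: the only extra storage is the stashed chunk of k tuples, never a full second copy of the table.

```python
import io

def rotate_left_inplace(f, n_tuples, k=100, tuple_size=8):
    """Move the first k tuples of a fixed-width tuple file to the end,
    sliding tuples k..n-1 down to position 0 (the scheme quoted above).
    Peak extra disk usage is k * tuple_size bytes (the stashed chunk),
    instead of a full second copy of the table."""
    f.seek(0)
    stash = f.read(k * tuple_size)        # first k tuples, moved aside
    f.seek(n_tuples * tuple_size)
    f.write(stash)                        # temporarily appended at the end
    # Slide everything (including the appended stash) down by k tuples,
    # one k-tuple chunk at a time; each read region sits entirely after
    # its write region, so the copy never overwrites unread data.
    for src in range(k, n_tuples + k, k):
        f.seek(src * tuple_size)
        buf = f.read(k * tuple_size)      # last chunk may be partial
        f.seek((src - k) * tuple_size)
        f.write(buf)
    f.truncate(n_tuples * tuple_size)     # drop the stale tail

# Toy demonstration with 1-byte "tuples": rotate 10 records left by 3.
f = io.BytesIO(bytes(range(10)))
rotate_left_inplace(f, n_tuples=10, k=3, tuple_size=1)
# file now holds 3,4,5,6,7,8,9 followed by 0,1,2
```

The same loop could service the in-memory variant: if n_tuples * tuple_size fits in the -S budget, read the whole table into one buffer instead of chunking.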
--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026