Lynn.Tilby@asu.edu wrote:
> Oleg,
>
> I worked for a company where we crunched 16+ terabytes of
> multiple drug company sales data per month. We of course
> used Sun's BIGGEST boxes and Oracle. But! the secret
> to doing it was LOTS of memory and putting the indexes,
> pointers, etc. in arrays and using Oracle OCI calls.
>
> If it is possible to use some lower-level funcs, perhaps
> funcs like you see in the .c output of ecpg, this might
> theoretically be possible in Postgres. I know that
> putting the index references or functionally similar data
> in arrays, and resolving the data location down to the
> logical or even physical disk location, is not only
> possible but extremely FAST, having done it on raw
> partitions and optical disks myself in custom-designed
> databases. This approach would work extremely fast
> with only 20 million rows. Yes, it would require some
> extra programming, but if you need REAL TIME response
> this approach could help solve your problem.
Each row has a ctid (tid) value that represents the physical page and
offset of the row on that page, and you could store that, but an update
to the row would change the tid, as would a VACUUM FULL, and after a
delete the tid could be reused by an unrelated row. Not pretty, but it
could be done.
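
For illustration, a minimal sketch of what storing and reusing a ctid
might look like in SQL (the table and column names here are made up for
the example):

    -- Capture the current physical location of rows of interest.
    SELECT ctid, order_id FROM orders WHERE customer_id = 42;

    -- Later, fetch a row directly by its stored ctid -- a single
    -- heap page fetch, no index scan needed.
    SELECT * FROM orders WHERE ctid = '(1234,5)';

    -- Caveat: an UPDATE to the row or a VACUUM FULL invalidates the
    -- stored ctid, and after a DELETE the slot can be reused by an
    -- unrelated row, so stored tids must be treated as transient.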
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073