> > Well, I think it might be optimised slightly. Am I right that postgres
> > uses heap files (i.e. they look like tables) during sorting? While this
> > is a merge sort, those files don't have to be table-like files.
> > Certainly, they could hold variable-length records without pages
> > (aren't they read sequentially?). Moreover, we could consider packing
> > the tape files before writing them out if necessary. Of course this
> > will cost some performance, but worse performance is better than being
> > unable to sort at all.
> >
> > Last question... What's the purpose of such a big sort? If somebody
> > gets 40M sorted records as the result of some query, what is he going
> > to do with them? Spend the next few years reading them? I mean,
> > wouldn't it be better to query the database for only the necessary
> > information and then sort that?
>
> this I don't know...I never even really thought about that,
> actually...Michael? :) Only you can answer that one.
I have an idea. Can he run CLUSTER on the data? If so, the sort will
not use small batches, and the disk space used during the sort will be
reduced. However, I think CLUSTER will NEVER finish on such a file,
unless it is already pretty well sorted.
--
Bruce Momjian | 830 Blythe Avenue
maillist@candle.pha.pa.us | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup. | (610) 853-3000(h)