Re: Merge algorithms for large numbers of "tapes"

From: Zeugswetter Andreas DCP SD
Subject: Re: Merge algorithms for large numbers of "tapes"
Date:
Msg-id: E1539E0ED7043848906A8FF995BDA579D991B7@m0143.s-mxs.net
In reply to: Merge algorithms for large numbers of "tapes"  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
> > This amounts to an assumption that you have infinite work_mem, in
> > which case you hardly need an external sort at all.  If your
> > work_mem is in fact finite, then at some point you need more than
> > two passes.  I'm not really interested in ripping out support for
> > sort operations that are much larger than work_mem.
>
> No it does not.  I have explained this before.  You can have
> one million files and merge them all into a final output with
> a single pass.  It does not matter how big they are or how
> much memory you have.

Huh? But if you have that many files, your disk access essentially
becomes random access (since you have thousands of files per spindle).

From tests on AIX I have pretty much concluded that if you read 256k
blocks at a time, random access does not really hurt that much any more.
So, if you can hold 256k per file in memory, that should be sufficient.
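To make the idea concrete, here is a minimal sketch of such a
single-pass k-way merge, with each run read through its own large
buffer so the disk sees 256k requests rather than per-record reads.
This is purely illustrative (Python for brevity, newline-delimited
sorted runs, hypothetical file names), not how PostgreSQL's tape sort
is actually implemented:

import heapq

RUN_BUFFER_SIZE = 256 * 1024  # 256k per run, as suggested above

def merge_runs(run_paths, out_path):
    """Single-pass k-way merge of pre-sorted run files.

    Each run gets its own 256k read buffer, so even with thousands
    of runs per spindle each physical read is a large block.
    Total buffer memory is roughly len(run_paths) * RUN_BUFFER_SIZE.
    """
    # buffering= gives each file object its own read-ahead buffer
    runs = [open(p, "r", buffering=RUN_BUFFER_SIZE) for p in run_paths]
    try:
        with open(out_path, "w", buffering=RUN_BUFFER_SIZE) as out:
            # heapq.merge lazily pulls the smallest next line across
            # all runs: one pass, regardless of how many runs there are
            out.writelines(heapq.merge(*runs))
    finally:
        for f in runs:
            f.close()

# Hypothetical usage:
# merge_runs(["run_%d.txt" % i for i in range(1024)], "sorted.txt")

Note the memory arithmetic this implies: one million runs at 256k each
would need on the order of 244 GiB of buffer memory, which is where a
finite work_mem forces multiple merge passes after all.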

Andreas


In the pgsql-hackers list, by date sent:

Previous
From: Stefan Kaltenbrunner
Date:
Message: Re: problem with large maintenance_work_mem settings and
Next
From: Hannu Krosing
Date:
Message: Re: Merge algorithms for large numbers of "tapes"