Re: Merge algorithms for large numbers of "tapes"

From: Tom Lane
Subject: Re: Merge algorithms for large numbers of "tapes"
Date:
Msg-id: 15725.1141834850@sss.pgh.pa.us
In reply to: Re: Merge algorithms for large numbers of "tapes"  ("Jim C. Nasby" <jnasby@pervasive.com>)
Responses: Re: Merge algorithms for large numbers of "tapes"  ("Jim C. Nasby" <jnasby@pervasive.com>)
List: pgsql-hackers
"Jim C. Nasby" <jnasby@pervasive.com> writes:
> If we do have to fail to disk, cut back to 128MB, because having 8x that
> certainly won't make the sort run anywhere close to 8x faster.

Not sure that follows.  In particular, the entire point of the recent
changes has been to extend the range in which we can use a single merge
pass --- that is, write the data once as N sorted runs, then merge them
in a single read pass.  As soon as you have to do an actual merge-back-
to-disk pass, your total I/O volume doubles, so there is definitely a
considerable gain if that can be avoided.  And a larger work_mem
translates directly to fewer/longer sorted runs.
        regards, tom lane
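
The tradeoff described above can be sketched in a few lines. This is not PostgreSQL's actual tuplesort code, just an illustrative model: the input is written once as N sorted runs whose size is bounded by a hypothetical `work_mem` limit, then all runs are merged in a single read pass. Larger `work_mem` means fewer, longer runs; only when the runs cannot all be merged at once would an extra merge-back-to-disk pass be needed, doubling total I/O volume.

```python
# Sketch of a one-merge-pass external sort (illustrative, not tuplesort.c).
import heapq

def external_sort(items, work_mem):
    # Pass 1 (write): produce sorted runs of at most work_mem items each.
    # A larger work_mem yields fewer, longer runs.
    runs = [sorted(items[i:i + work_mem])
            for i in range(0, len(items), work_mem)]
    # Pass 2 (read): a single k-way heap merge over all runs.
    # If there were more runs than mergeable "tapes", an intermediate
    # merge-back-to-disk pass would be required, doubling total I/O.
    return list(heapq.merge(*runs))

print(external_sort([5, 3, 8, 1, 9, 2, 7, 4, 6, 0], work_mem=4))
# prints [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```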

