Re: Merge algorithms for large numbers of "tapes"

From	Jim C. Nasby
Subject	Re: Merge algorithms for large numbers of "tapes"
Date
Msg-id	20060308174904.GD45250@pervasive.com
In reply to	Re: Merge algorithms for large numbers of "tapes"  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses	Re: Merge algorithms for large numbers of "tapes"  ("Luke Lonergan" <llonergan@greenplum.com>)
	Re: Merge algorithms for large numbers of "tapes"  (Tom Lane <tgl@sss.pgh.pa.us>)
List	pgsql-hackers
On Wed, Mar 08, 2006 at 11:20:50AM -0500, Tom Lane wrote:
> "Jim C. Nasby" <jnasby@pervasive.com> writes:
> > If we do have to fail to disk, cut back to 128MB, because having 8x that
> > certainly won't make the sort run anywhere close to 8x faster.
> 
> Not sure that follows.  In particular, the entire point of the recent
> changes has been to extend the range in which we can use a single merge
> pass --- that is, write the data once as N sorted runs, then merge them
> in a single read pass.  As soon as you have to do an actual merge-back-
> to-disk pass, your total I/O volume doubles, so there is definitely a
> considerable gain if that can be avoided.  And a larger work_mem
> translates directly to fewer/longer sorted runs.

But do fewer/longer sorted runs translate into not merging back to disk?
I thought that was controlled by whether we need to be able to rewind the
result set.
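[Editor's note: Tom's argument above — write N sorted runs once, merge them in a single read pass if N fits within the merge fan-in, and pay an extra full read+write of the data for every merge-back-to-disk pass — can be sketched with a simple I/O model. This is a hypothetical illustration, not PostgreSQL's actual cost logic; `fan_in` is an assumed merge width.]

```python
import math

def external_sort_io(data_mb, work_mem_mb, fan_in=6):
    """Rough I/O volume (MB) for an external merge sort.

    Simplified model: each intermediate merge pass reads and
    rewrites the whole data set; the final merge only reads,
    since its output streams to the consumer.
    """
    # Larger work_mem => fewer, longer initial sorted runs.
    runs = math.ceil(data_mb / work_mem_mb)
    io = data_mb                      # write the initial runs
    while runs > fan_in:              # need merge-back-to-disk passes
        io += 2 * data_mb             # read + rewrite everything once
        runs = math.ceil(runs / fan_in)
    io += data_mb                     # final single-pass read merge
    return io
```

For a 1 GB sort under this model, `work_mem` of 256 MB yields 4 runs and one read pass (2048 MB of I/O total), while 8 MB yields 128 runs and multiple merge passes, tripling the I/O — which is the gain Tom describes from avoiding merge-back-to-disk.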
-- 
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

