Re: [HACKERS] A Better External Sort?

From: Jim C. Nasby
Subject: Re: [HACKERS] A Better External Sort?
Date:
Msg-id: 20051008225155.GA16679@pervasive.com
In response to: Re: [HACKERS] A Better External Sort?  ("Zeugswetter Andreas DAZ SD" <ZeugswetterA@spardat.at>)
List: pgsql-performance
On Thu, Sep 29, 2005 at 03:28:27PM +0200, Zeugswetter Andreas DAZ SD wrote:
>
> > In my original example, a sequential scan of the 1TB of 2KB
> > or 4KB records, => 250M or 500M records of data, being sorted
> > on a binary value key will take ~1000x more time than reading
> > in the ~1GB Btree I described that used a Key+RID (plus node
> > pointers) representation of the data.
>
> IMHO you seem to ignore the final step your algorithm needs: collecting
> the data rows. After you have sorted the keys, the collect step will
> effectively access the tuples in random order (given a sufficiently
> large key range).
>
> This random access is bad. It effectively allows a competing algorithm
> to read the whole data set at least 40 times sequentially, or write the
> set 20 times sequentially. (Those are the random/sequential ratios of
> modern disks.)

True, but there is a compromise: don't shuffle full tuples around when
sorting in memory. Do the sorting with pointers, then write the full
tuples out to 'tape' if needed.
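To make the in-memory part concrete, here is a minimal sketch (plain C,
not PostgreSQL's actual tuplesort code; the Tuple layout, the sizes and
the printf standing in for the 'tape' write are only assumptions for
illustration): qsort rearranges an array of pointers by key, and the
~2KB tuples themselves are only touched once, when the sorted run is
emitted.

#include <stdio.h>
#include <stdlib.h>

typedef struct
{
    long        key;            /* sort key */
    char        payload[2048];  /* rest of the ~2KB tuple, never copied */
} Tuple;

/* Compare two Tuple pointers by key; the tuples stay where they are. */
static int
cmp_tuple_ptr(const void *a, const void *b)
{
    const Tuple *ta = *(const Tuple *const *) a;
    const Tuple *tb = *(const Tuple *const *) b;

    return (ta->key > tb->key) - (ta->key < tb->key);
}

int
main(void)
{
    long        n = 4;
    long        keys[] = {42, 7, 99, 7};
    Tuple      *tuples = calloc(n, sizeof(Tuple));
    Tuple     **ptrs = malloc(n * sizeof(Tuple *));

    for (long i = 0; i < n; i++)
    {
        tuples[i].key = keys[i];
        snprintf(tuples[i].payload, sizeof(tuples[i].payload), "row %ld", i);
        ptrs[i] = &tuples[i];   /* sort pointers, not 2KB tuples */
    }

    qsort(ptrs, n, sizeof(Tuple *), cmp_tuple_ptr);

    /* "Write to tape": emit the full tuples in key order. */
    for (long i = 0; i < n; i++)
        printf("key=%ld %s\n", ptrs[i]->key, ptrs[i]->payload);

    free(ptrs);
    free(tuples);
    return 0;
}

Swapping pointer-sized entries instead of 2KB tuples keeps the
comparison loop cache-friendly; the cost is one extra indirection per
comparison.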

Of course, the other issue here is that as correlation improves (the
heap order increasingly matches the sort key), the collection step
becomes closer to sequential, so it becomes better and better to do
full pointer-based sorting.
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
