Re: Merge algorithms for large numbers of "tapes"
From | Hannu Krosing
---|---
Subject | Re: Merge algorithms for large numbers of "tapes"
Date |
Msg-id | 1141893421.3810.5.camel@localhost.localdomain
In reply to | Re: Merge algorithms for large numbers of "tapes" ("Jim C. Nasby" <jnasby@pervasive.com>)
List | pgsql-hackers
On Wednesday, 2006-03-08 at 20:08, Jim C. Nasby wrote:
> But it will take a whole lot of those rewinds to equal the amount of
> time required by an additional pass through the data.

I guess that missing a sector read also implies a "rewind": if you don't
process the data read from a "tape" fast enough, you have to wait a whole
disc revolution (roughly the same as "seek time" on modern disks) before
you get the next chunk of data.

> I'll venture a guess that as long as you've got enough memory to still
> read chunks back in 8k blocks, it won't be possible for a multi-pass
> sort to out-perform a one-pass sort. Especially if you also had the
> ability to do pre-fetching (not something to fuss with now, but
> certainly a possibility in the future).
>
> In any case, what we really need is at least good models backed by good
> drive performance data.

And filesystem performance data, as Postgres uses the OS's native
filesystems.

--------------
Hannu
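The trade-off being argued about here can be made concrete with a toy cost model. The sketch below is purely illustrative and is not taken from PostgreSQL's tuplesort code: the data size, run count, disk timings, and the cost formula itself are all assumptions. It charges every per-tape buffer refill one rotational latency ("rewind") and every merge pass a full sequential read plus write of the data, so a wide one-pass merge (small per-tape buffers) pays more in latency while a narrower merge pays for extra passes.

```c
/*
 * Toy cost model for the one-pass vs. multi-pass merge trade-off.
 * Everything here is an illustrative assumption (data size, run count,
 * disk timings, and the cost formula itself); it is not PostgreSQL code.
 */
#include <math.h>
#include <stdio.h>

#define BLOCK_KB        8.0     /* smallest sensible read unit, kB */
#define SEQ_MB_PER_SEC  60.0    /* assumed sequential transfer rate */
#define ROT_LATENCY_MS  8.0     /* assumed cost of "missing" a read: ~one revolution */

/*
 * Estimated wall-clock seconds to merge `runs` initial runs totalling
 * `data_mb` MB, merging `fan_in` tapes at a time, with `work_mem_mb` MB
 * of memory split into per-tape read buffers.  Each pass reads and
 * writes the whole data set sequentially; each buffer refill is charged
 * one rotational latency ("rewind").
 */
static double merge_cost_sec(double data_mb, double work_mem_mb,
                             int runs, int fan_in)
{
    double buf_kb = work_mem_mb * 1024.0 / fan_in;  /* per-tape buffer */

    if (buf_kb < BLOCK_KB)
        return -1.0;            /* can't even read 8 kB blocks */

    int passes = (int) ceil(log((double) runs) / log((double) fan_in));
    if (passes < 1)
        passes = 1;

    double refills_per_pass = data_mb * 1024.0 / buf_kb;
    double xfer_sec = 2.0 * passes * data_mb / SEQ_MB_PER_SEC;      /* read + write */
    double latency_sec = passes * refills_per_pass * ROT_LATENCY_MS / 1000.0;

    return xfer_sec + latency_sec;
}

int main(void)
{
    double data_mb = 4096.0;    /* 4 GB of run data */
    double work_mem_mb = 16.0;  /* memory available for read buffers */
    int runs = 512;             /* initial sorted runs */
    int fan_ins[] = {512, 32, 8};

    for (int i = 0; i < 3; i++)
    {
        double sec = merge_cost_sec(data_mb, work_mem_mb, runs, fan_ins[i]);

        if (sec < 0)
            printf("fan-in %3d: buffers too small for 8 kB reads\n", fan_ins[i]);
        else
            printf("fan-in %3d: estimated merge time %7.1f s\n", fan_ins[i], sec);
    }
    return 0;
}
```

Compiled with the math library (`cc merge_cost.c -lm`), and under these made-up numbers, the model makes a 32-way two-pass merge come out cheaper than a 512-way one-pass merge, precisely because the one-pass merge's 32 kB per-tape buffers incur so many latency hits. Whether that reflects reality is exactly what measured drive and filesystem performance data would have to settle.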