Re: Huge Data sets, simple queries

From: Jim C. Nasby
Subject: Re: Huge Data sets, simple queries
Date:
Msg-id: 20060131231227.GP95850@pervasive.com
In reply to: Re: Huge Data sets, simple queries  ("Luke Lonergan" <llonergan@greenplum.com>)
Responses: Re: Huge Data sets, simple queries  ("Luke Lonergan" <llonergan@greenplum.com>)
List: pgsql-performance
On Tue, Jan 31, 2006 at 02:52:57PM -0800, Luke Lonergan wrote:
> It's because your alternating reads are skipping in chunks across the
> platter.  Disks work at their max internal rate when reading sequential
> data, and the cache is often built to buffer a track-at-a-time, so
> alternating pieces that are not contiguous have the effect of halving the max
> internal sustained bandwidth of each drive - the total is equal to one
> drive's sustained internal bandwidth.
>
> This works differently for RAID0, where the chunks are allocated to each
> drive and laid down contiguously on each, so that when they're read back,
> each drive runs at its sustained sequential throughput.
>
> The alternating technique in mirroring might improve rotational latency for
> random seeking - a trick that Tandem exploited, but it won't improve
> bandwidth.

Or just work in multiples of tracks, which would greatly reduce the
impact of delays from seeking.
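
To put rough numbers on Luke's point (back-of-envelope only, the figures
below are made up for illustration, not measured):

    # Rough model: mirrored drives reading back alternating chunks vs. RAID0
    # reading contiguous stripes. Purely illustrative numbers.
    drive_bw = 60.0      # MB/s sustained sequential rate of one drive
    n_drives = 2

    # Mirror with alternating chunk reads: each drive skips every other
    # chunk, so it delivers about half its sequential rate, and the pair
    # together delivers roughly one drive's worth.
    mirror_alternating = n_drives * (drive_bw / 2)

    # RAID0: chunks are laid down contiguously on each drive, so each drive
    # streams at full rate and the rates add up.
    raid0 = n_drives * drive_bw

    print(mirror_alternating, raid0)   # 60.0 120.0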

> > As for software raid, I'm wondering how well that works if you can't use
> > a BBU to allow write caching/re-ordering...
>
> Works great with standard OS write caching.

Well, the only problem with that is that if the machine crashes for any
reason, you risk having the database corrupted (or, at best, losing some
committed transactions).
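
The OS will happily report a write complete while the data is still
sitting in RAM, so the database has to fsync before it can call a commit
durable; that fsync is exactly what a battery-backed cache lets the
controller acknowledge cheaply. A minimal sketch of the difference
(Python, illustrative only):

    import os

    # write() alone only hands the data to the OS page cache; a crash or
    # power loss before the cache is flushed can still lose it.
    fd = os.open("datafile", os.O_WRONLY | os.O_CREAT, 0o600)
    os.write(fd, b"committed transaction record\n")

    # fsync() forces the data down to stable storage before the commit is
    # reported as durable.
    os.fsync(fd)
    os.close(fd)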
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
