Re: 3ware vs. MegaRAID

From: Dave Crooke
Subject: Re: 3ware vs. MegaRAID
Date:
Msg-id: x2jca24673e1004072044hac4475e8g3a3fefdcda6ebefe@mail.gmail.com
Reply to: Re: 3ware vs. MegaRAID  (Scott Carey <scott@richrelevance.com>)
Responses: Re: 3ware vs. MegaRAID
List: pgsql-performance
For a card level RAID controller, I am a big fan of the LSI 8888, which is available in a PCIe riser form factor for blade / 1U servers, and comes with 0.5GB of battery backed cache. Full Linux support including mainline kernel drivers and command line config tools. Was using these with SAS expanders and 48x 1TB SATA-300 spindles per card, and it was pretty (adjective) quick for a card-based system ... comparable with a small FC-AL EMC Clariion CX3 series in fact, just without the redundancy.

My only gripe is that as of 18 months ago, it did not support triples (RAID-10 with 3 drives per set instead of 2) ... I had a "little knowledge is a dangerous thing" client who was stars-in-the-eyes sold on RAID-6 and so wanted double drive failure protection for everything (and didn't get my explanation about how archive logs on other LUNs make this OK, or why RAID-5/6 sucks for a database, or really listen to anything I said :-) ... It would do RAID-10 quads however (weird...).

Also decent in the Dell OEM'ed version (don't know the Dell PERC model number) though they tend to be a bit behind on firmware.

MegaCLI isn't the slickest tool, but you can find Nagios scripts for it online ... what's the problem? The Clariion will send you (and EMC support) an email if it loses a drive, but I'm not sure that's worth the 1500% price difference ;-)
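A minimal cron/Nagios-style check along these lines is enough for most setups (a sketch only: the MegaCli path is the common LSI install location, and the device names are whatever your system has; adjust both):

```shell
#!/bin/sh
# Hypothetical MegaCLI health check (sketch). MegaCli's -PDList output
# contains one "Firmware state:" line per physical drive; anything other
# than "Online" deserves an alert.
MEGACLI=/opt/MegaRAID/MegaCli/MegaCli64

# Count drives whose "Firmware state:" line is not Online, reading
# `MegaCli -PDList` output on stdin so the parsing is testable on its own.
count_bad() {
    grep 'Firmware state:' | grep -vc 'Online'
}

if [ -x "$MEGACLI" ]; then
    bad=$("$MEGACLI" -PDList -aALL -NoLog | count_bad)
    [ "$bad" -gt 0 ] && echo "WARNING: $bad drive(s) not Online"
fi
```

Wire the WARNING line into whatever alerting you already have (Nagios check output, mail, etc.).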

Cheers
Dave

On Wed, Apr 7, 2010 at 10:29 PM, Scott Carey <scott@richrelevance.com> wrote:

On Apr 6, 2010, at 9:49 AM, Ireneusz Pluta wrote:

> Greg Smith pisze:
>>
>> The MegaRAID SAS 84* cards have worked extremely well for me in terms
>> of performance and features for all the systems I've seen them
>> installed in.  I'd consider it a modest upgrade from that 3ware card,
>> speed wise.
> OK, sounds promising.
>> The main issue with the MegaRAID cards is that you will have to write
>> a lot of your own custom scripts to monitor for failures using their
>> painful MegaCLI utility, and under FreeBSD that also requires using
>> their Linux utility via emulation:
>> http://www.freebsdsoftware.org/sysutils/linux-megacli.html
>>
> And this is what worries me, as I prefer not to play with utilities too
> much, but put the hardware into production, instead. So I'd like to find
> more precisely if expected speed boost would pay enough for that pain.
> Let me ask the following way then, if such a question makes much sense
> with the data I provide. I already have  another box with 3ware
> 9650SE-16ML. With the array configured as follows:
> RAID-10, 14 x 500GB Seagate ST3500320NS, stripe size 256K, 16GB RAM,
> Xeon X5355, write caching enabled, BBU, FreeBSD 7.2, ufs,
> when testing with bonnie++ on idle machine, I got sequential block
> read/write around 320MB/290MB and random seeks around 660.
>
> Would that result be substantially better with LSI MegaRAID?
>

My experiences with the 3ware 9650 on linux are similar -- horribly slow for some reason with raid 10 on larger arrays.

Others have claimed this card performs well on FreeBSD, but the above looks just as bad as on Linux.
660 iops is slow for 14 spindles of any type, although raid 10 might limit reads to an effective 7 spindles, in which case it's OK -- but it should still top 100 iops per effective disk on 7200rpm drives unless the effective concurrency of the benchmark is low.  My experience with the 9650 was that iops was OK, but sequential performance for raid 10 was very poor.

On linux, I was able to get better sequential read performance like this:

* set it up as 3 raid 10 blocks, each 4 drives (2 others spare or for xlog or something).  Software RAID-0 these RAID 10 chunks together in the OS.
* Change the linux 'readahead' block device parameter to at least 4MB (8192, see blockdev --setra) -- I don't know if there is a FreeBSD equivalent.
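On Linux the two steps above look roughly like this (device names /dev/sd[bcd] and /dev/md0 are placeholders; the mdadm and blockdev invocations are standard, but adapt them to your layout):

```shell
#!/bin/sh
# Sketch: stripe three hardware raid-10 LUNs together with md raid-0,
# then raise the readahead on the resulting device.
#
#   mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
#
# blockdev --setra takes 512-byte sectors, so 8192 sectors = 4 MiB:
#
#   blockdev --setra 8192 /dev/md0
#
# Sanity check on the unit math:
ra_sectors=8192
ra_bytes=$((ra_sectors * 512))
echo "$ra_bytes"   # 4194304 bytes = 4 MiB
```

Note the readahead setting does not persist across reboots; re-apply it from an init script or udev rule.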

With a better raid card you should hit at minimum 800, if not 1000+, MB/sec, depending on whether you bottleneck on your PCIe or SATA ports.  I switched to two Adaptec 5xx5 series cards (each with half the disks, software raid-0 between them) to get about 1200MB/sec max throughput and 2000 iops from two sets of 10 Seagate STxxxxxxxNS 1TB drives.   That is still not as good as it should be, but much better.   FWIW, one set of 8 drives in raid 10 on the Adaptec did about 750MB/sec sequential and ~950 iops read.  It required XFS to do this; ext3 was 20% slower in throughput.
A PERC 6 card (LSI MegaRaid clone) performed somewhere between the two.


I don't like bonnie++ much; it's OK at single-drive tests but not as good at larger arrays.  If you have time, try fio and create some custom profiles.
Lastly, for these sorts of tests partition your array in smaller chunks so that you can reliably test the front or back of the drive.  Sequential speed at the front of a typical 3.5" drive is about 2x as fast as at the end of the drive.
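A starting point for such a fio profile might look like this (a sketch: /dev/sdb1 is a placeholder for a test partition at the front of the array, and the numbers are illustrative):

```
; hypothetical fio job file: sequential then random read on a
; front-of-disk partition (replace /dev/sdb1 with your test partition)
[global]
filename=/dev/sdb1
direct=1
ioengine=libaio
runtime=60
time_based=1

[seq-read]
rw=read
bs=1m

[rand-read]
stonewall
rw=randread
bs=8k
iodepth=32
```

Repeat the same jobs against a partition at the end of the drives to see the inner-track falloff.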

>
> --
> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance


