Discussion: dell versus hp
Hello. We are planning to move from MS SQL Server to PostgreSQL for our production system. Both read and write performance are equally important. Writing is the bottleneck of our current MS SQL Server system. All of our existing servers are from Dell, but I want to look at some other options as well. We are currently looking at rack boxes with 8 internal SAS discs. Two mirrored for OS, two mirrored for WAL and 4 in raid 10 for the base.

Here are our current alternatives:

1) Dell 2900 (5U)
8 * 146 GB SAS 15Krpm 3,5"
8GB ram
Perc 5/i. battery backup. 256MB ram.
2 * 4 Xeon 2,66GHz

2) Dell 2950 (2U)
8 * 146 GB SAS 10Krpm 2,5" (not really selectable, but I think the
webshop is wrong..)
8GB ram
Perc 5/i. battery backup. 256MB ram.
2 * 4 Xeon 2,66GHz

3) HP ProLiant DL380 G5 (2U)
8 * 146 GB SAS 10Krpm 2,5"
8GB ram
P400 raid controller. battery backup. 512MB ram.
2 * 2 Xeon 3GHz

All of those alternatives cost about the same. How much (in numbers) better are 15K 3,5" than 10K 2,5"? What about the raid controllers? Any other alternatives in that price-range?

Regards,
- Tore.
> All of our existing servers are from Dell, but I want to look at some
> other options as well. We are currently looking at rack boxes with 8
> internal SAS discs. Two mirrored for OS, Two mirrored for WAL and 4 in
> raid 10 for the base.
>
> Here are our current alternatives:
>
> 1) Dell 2900 (5U)
> 8 * 146 GB SAS 15Krpm 3,5"
> 8GB ram
> Perc 5/i. battery backup. 256MB ram.
> 2 * 4 Xeon 2,66GHz
>
> 2) Dell 2950 (2U)
> 8 * 146 GB SAS 10Krpm 2,5" (not really selectable, but I think the
> webshop is wrong..)
> 8GB ram
> Perc 5/i. battery backup. 256MB ram.
> 2 * 4 Xeon 2,66GHz
>
> 3) HP ProLiant DL380 G5 (2U)
> 8 * 146 GB SAS 10Krpm 2,5"
> 8GB ram
> P400 raid controller. battery backup. 512MB ram.
> 2 * 2 Xeon 3GHz
>
> All of those alternatives cost ca the same. How much (in numbers)
> better are 15K 3,5" than 10K 2,5"? What about the raid controllers?
> Any other alternatives in that price-range?

When writing is important you want to use 15K rpm disks. I personally use the DL380 and am very satisfied with the hardware and the built-in ciss controller (with 256 MB cache, using 10K rpm disks).

How much space do you need? 72 GB is the largest 15K 2.5" sas-disk from HP.

--
regards
Claus

When lenity and cruelty play for a kingdom, the gentlest gamester is the soonest winner.

Shakespeare
Hi List,

On Tuesday, November 6, 2007, Tore Halset wrote:
> 1) Dell 2900 (5U)
> 8 * 146 GB SAS 15Krpm 3,5"
> 8GB ram
> Perc 5/i. battery backup. 256MB ram.
> 2 * 4 Xeon 2,66GHz

In fact you can add 2 hot-plug disks to this setup, connected to the front panel. We bought this very same model with 10 15Krpm disks some weeks ago, and it reached production last week.

So we have 2 OS raid1 disks (with /var/backups and /var/log --- pg_log), 2 raid1 disks for WAL and 6 disks in a RAID10, the 3 raids managed by the included Perc raid controller. So far so good!

Some knowing-better-than-me people on #postgresql remarked that depending on the write transaction volume (40 to 60 percent of my tps, but not so much for this hardware), I could benefit somewhat from setting the WAL on the OS raid1 and having 8 raid10 disks for data... which I'll consider for another project.

Hope this helps,
--
dim
Tore,

* Tore Halset (halset@pvv.ntnu.no) wrote:
> All of our existing servers are from Dell, but I want to look at some other
> options as well. We are currently looking at rack boxes with 8 internal SAS
> discs. Two mirrored for OS, Two mirrored for WAL and 4 in raid 10 for the
> base.

I'm a big HP fan, personally. Rather than talking about the hardware for a minute though, I'd suggest you check out what's happening for 8.3. Here's a pretty good writeup by Greg Smith on it:

http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm

Hopefully it'll help w/ whatever hardware you end up going with.

Enjoy,

Stephen
On Nov 6, 2007, at 12:53, Dimitri Fontaine wrote:

> On Tuesday, November 6, 2007, Tore Halset wrote:
>> 1) Dell 2900 (5U)
>> 8 * 146 GB SAS 15Krpm 3,5"
>> 8GB ram
>> Perc 5/i. battery backup. 256MB ram.
>> 2 * 4 Xeon 2,66GHz
>
> In fact you can add 2 hot-plug disks on this setup, connected to the
> frontpane. We've bought this very same model with 10 15 rpm disks some weeks
> ago, and it reached production last week.
>
> So we have 2 OS raid1 disk (with /var/backups and /var/log --- pg_log), 2
> raid1 disk for WAL and 6 disks in a RAID10, the 3 raids managed by the
> included Perc raid controller. So far so good!

Interesting. Do you have any benchmarking numbers? Did you test with software raid 10 as well?

Regards,
- Tore.
On Nov 6, 2007, at 12:36, Claus Guttesen wrote:

>> All of our existing servers are from Dell, but I want to look at some
>> other options as well. We are currently looking at rack boxes with 8
>> internal SAS discs. Two mirrored for OS, Two mirrored for WAL and 4 in
>> raid 10 for the base.
>>
>> Here are our current alternatives:
>>
>> 1) Dell 2900 (5U)
>> 8 * 146 GB SAS 15Krpm 3,5"
>> 8GB ram
>> Perc 5/i. battery backup. 256MB ram.
>> 2 * 4 Xeon 2,66GHz
>>
>> 2) Dell 2950 (2U)
>> 8 * 146 GB SAS 10Krpm 2,5" (not really selectable, but I think the
>> webshop is wrong..)
>> 8GB ram
>> Perc 5/i. battery backup. 256MB ram.
>> 2 * 4 Xeon 2,66GHz
>>
>> 3) HP ProLiant DL380 G5 (2U)
>> 8 * 146 GB SAS 10Krpm 2,5"
>> 8GB ram
>> P400 raid controller. battery backup. 512MB ram.
>> 2 * 2 Xeon 3GHz
>>
>> All of those alternatives cost ca the same. How much (in numbers)
>> better are 15K 3,5" than 10K 2,5"? What about the raid controllers?
>> Any other alternatives in that price-range?
>
> When writing is important you want to use 15K rpm disks. I personally
> use the DL380 and is very satisfied with the hardware and the buildin
> ciss-controller (with 256 MB cache using 10K rpm disks).
>
> How much space do you need? 72 GB is the largest 15K 2.5" sas-disk
> from HP.

Okay, thanks. We need 100GB for the database, so 4 72GB in raid 10 will be fine.

Regards,
- Tore.
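[Editor's note: a quick sanity check on that sizing, not from the thread, just the usual RAID 10 arithmetic. A 4-disk RAID 10 mirrors every disk, so half the raw capacity is usable.]

```python
def raid10_usable_gb(n_disks: int, disk_gb: int) -> int:
    # RAID 10 stripes across mirrored pairs, so usable space is half the raw total.
    assert n_disks % 2 == 0, "RAID 10 needs an even number of disks"
    return n_disks * disk_gb // 2

print(raid10_usable_gb(4, 72))  # 144 -- fits the 100 GB database with headroom
```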
On Tuesday, November 6, 2007, Tore Halset wrote:
> Interesting. Do you have any benchmarking numbers? Did you test with
> software raid 10 as well?

Just some basic pg_restore figures, which only make sense (for me anyway) when compared to restoring the same data on other machines, and to show the effect of having a dedicated array for the WALs (fsync off not having that much of an influence on the pg_restore timing)...

The previous production server had a RAM fault and made us switch without taking the time for all the tests we could have run on the new "beast".

Regards,
--
dim
On Tue, 6 Nov 2007, Dimitri Fontaine wrote:

> Some knowing-better-than-me people on #postgresql had the remark that
> depending on the write transaction volumes (40 to 60 percent of my tps, but
> no so much for this hardware), I could somewhat benefit in setting the WAL on
> the OS raid1, and having 8 raid10 disks for data

That really depends on the write volume to the OS drive. If there's lots of writes there for things like logs and temporary files, the disruption to the WAL writes could be a problem. Part of the benefit of having a separate WAL disk is that the drive never has to seek somewhere to write anything else.

Now, if instead you considered putting the WAL onto the database disks and adding more disks to the array, that might work well. You'd also be losing something because the WAL writes may have to wait behind seeks elsewhere. But once you have enough disks in an array to spread all the load over, that itself may improve write throughput enough to still be a net improvement.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
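[Editor's note: the reason the dedicated WAL disk matters is that every commit waits on an fsync of the WAL. A minimal sketch of measuring that cost, not from the thread; the file name and record size are arbitrary, and on a shared busy disk these latencies would include seek time back to the WAL position.]

```python
import os
import tempfile
import time

def fsync_latencies(path, n_commits=50, record_size=8192):
    """Append a WAL-sized record and fsync after each one, timing the fsync.

    This mimics the commit path: every COMMIT forces an fsync of the WAL,
    so commit latency tracks how fast the WAL drive can complete it.
    """
    latencies = []
    with open(path, "wb") as f:
        for _ in range(n_commits):
            f.write(os.urandom(record_size))
            f.flush()
            start = time.perf_counter()
            os.fsync(f.fileno())
            latencies.append(time.perf_counter() - start)
    return latencies

fd, path = tempfile.mkstemp()
os.close(fd)
lats = fsync_latencies(path)
os.unlink(path)
print(f"median fsync latency: {sorted(lats)[len(lats) // 2] * 1e6:.0f} us")
```

Run against the WAL array versus the OS array, the difference in these numbers under concurrent load is exactly what the paragraph above is weighing.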
On Nov 6, 2007, at 5:12 AM, Tore Halset wrote:

> Here are our current alternatives:

Two things I recommend. If the drives are made by Western Digital, run away. If the PERC5/i is an Adaptec card, run away.

Max out your cache RAM on the RAID card. 256 MB is the minimum when you have such big data sets that need the big disks you're looking at.
On Nov 6, 2007, at 1:10 PM, Greg Smith wrote:

> elsewhere. But once you have enough disks in an array to spread all
> the load over that itself may improve write throughput enough to
> still be a net improvement.

This has been my experience with 14+ disks in an array (both RAID10 and RAID5). The difference is barely noticeable.
On Nov 8, 2007 10:43 AM, Vivek Khera <khera@kcilink.com> wrote:
> On Nov 6, 2007, at 1:10 PM, Greg Smith wrote:
>
> > elsewhere. But once you have enough disks in an array to spread all
> > the load over that itself may improve write throughput enough to
> > still be a net improvement.
>
> This has been my expeience with 14+ disks in an array (both RAID10 and
> RAID5). The difference is barely noticeable.

Mine too. I would suggest though, that by the time you get to 14 disks, you switch from RAID-5 to RAID-6 so you have double redundancy. Performance of a degraded array is better in RAID6 than RAID5, and you can run your rebuilds much slower since you're still redundant.

> If the PERC5/i is an Adaptec card, run away.

I've heard the newest Adaptecs, even the PERC implementations, aren't bad. Of course, that doesn't mean I'm gonna use one, but who knows? They might have made a decent card after all.
On Thursday 08 November 2007 19:22:48, Scott Marlowe wrote:
> On Nov 8, 2007 10:43 AM, Vivek Khera <khera@kcilink.com> wrote:
> > On Nov 6, 2007, at 1:10 PM, Greg Smith wrote:
> > > elsewhere. But once you have enough disks in an array to spread all
> > > the load over that itself may improve write throughput enough to
> > > still be a net improvement.
> >
> > This has been my expeience with 14+ disks in an array (both RAID10 and
> > RAID5). The difference is barely noticeable.
>
> Mine too.

May we conclude from this that mixing WAL and data onto the same array is a good idea starting at 14 spindles? The Dell 2900 5U machine has 10 spindles max, that would make 2 for the OS (raid1) and 8 for mixing WAL and data... not enough to benefit from the move, or still worth testing?

> I would suggest though, that by the time you get to 14
> disks, you switch from RAID-5 to RAID-6 so you have double redundancy.
> Performance of a degraded array is better in RAID6 than RAID5, and
> you can run your rebuilds much slower since you're still redundant.

Is raid6 better than raid10 in terms of overall performance, or a better cut when you need capacity more than throughput?

Thanks for sharing the knowledge, regards,
--
dim
>>> On Thu, Nov 8, 2007 at 2:14 PM, in message
<200711082114.36788.dfontaine@hi-media.com>, Dimitri Fontaine
<dfontaine@hi-media.com> wrote:
> The Dell 2900 5U machine has 10 spindles max, that would make 2 for the OS
> (raid1) and 8 for mixing WAL and data... not enough to benefit from the
> move, or still to test?

From our testing and various posts on the performance list, you can expect a good battery-backed caching RAID controller to eliminate most of the performance difference between separate WAL drives and leaving them on the same RAID array with the rest of the database. See, for example:

http://archives.postgresql.org/pgsql-performance/2007-02/msg00026.php

Ben found a difference of "a few percent"; I remember seeing a post from someone who did a lot of testing and found a difference of 1%. As stated in the above referenced posting, it will depend on your workload (and your hardware), so it is best if you can do some realistic tests.

-Kevin
On Thursday 08 November 2007, Dimitri Fontaine <dfontaine@hi-media.com> wrote:
> Is raid6 better than raid10 in term of overall performances, or a better
> cut when you need capacity more than throughput?

You can't touch RAID 10 for performance or reliability. The only reason to use RAID 5 or RAID 6 is to get more capacity out of the same drives.

--
Alan
On Nov 8, 2007, at 1:22 PM, Scott Marlowe wrote:

> I've heard the newest adaptecs, even the perc implementations aren't
> bad.

I have a pair of Adaptec 2230SLP cards. Worst. Just replaced them on Tuesday with fibre channel cards connected to external RAID enclosures. Much nicer.
On Nov 8, 2007 2:56 PM, Alan Hodgson <ahodgson@simkin.ca> wrote:
> On Thursday 08 November 2007, Dimitri Fontaine <dfontaine@hi-media.com> wrote:
> > Is raid6 better than raid10 in term of overall performances, or a better
> > cut when you need capacity more than throughput?
>
> You can't touch RAID 10 for performance or reliability. The only reason to
> use RAID 5 or RAID 6 is to get more capacity out of the same drives.

Actually, RAID6 is about the same on reliability, since it has double parity and theoretically ANY TWO disks could fail, and RAID6 will still have all your data. If the right two disks fail in a RAID-10 you lose everything. Admittedly, that's a pretty remote possibility, but so is three drives failing at once in a RAID-6.

For performance RAID-10 is still pretty much the best choice.
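[Editor's note: the "right two disks" argument can be made exact. A sketch of the survival odds for a second simultaneous failure, assuming a standard RAID 10 of independent mirror pairs and the failures being uniformly random; not from the thread.]

```python
from fractions import Fraction

def raid10_second_failure_survival(n_disks):
    # After one disk dies, the array survives a second failure unless it
    # hits the dead disk's mirror partner: 1 fatal choice out of n-1 disks.
    assert n_disks % 2 == 0
    return 1 - Fraction(1, n_disks - 1)

def raid6_second_failure_survival(n_disks):
    # Double parity: any two simultaneous failures are survivable.
    return Fraction(1)

print(raid10_second_failure_survival(14))  # 12/13 -- loses data 1 time in 13
print(raid6_second_failure_survival(14))   # 1
```

So for a 14-disk array, RAID 10 loses data on roughly 1 in 13 double failures, while RAID 6 survives them all; the trade, as the thread says, is write performance.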
* Scott Marlowe:

> If the right two disks fail in a RAID-10 you lose everything.
> Admittedly, that's a pretty remote possibility,

It's not, unless you carefully lay out the RAID-1 subunits so that their drives aren't physically adjacent. 8-/ I don't think many controllers support that.

--
Florian Weimer <fweimer@bfk.de> BFK edv-consulting GmbH http://www.bfk.de/
Kriegsstraße 100 tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99
Apart from the disks, you might also investigate using Opterons instead of Xeons. There appears to be a significant performance gap between Opteron and Xeon: Xeons appear to spend more time passing around ownership of memory cache lines in the case of a spinlock. It's not yet clear whether or not the issue has been worked around, so you should at least investigate it a bit.

We're using an HP DL385 ourselves, which performs quite well.

-R-

Tore Halset wrote:
> Hello.
> 1) Dell 2900 (5U)
> 8 * 146 GB SAS 15Krpm 3,5"
> 8GB ram
> Perc 5/i. battery backup. 256MB ram.
> 2 * 4 Xeon 2,66GHz
>
> 2) Dell 2950 (2U)
> 8 * 146 GB SAS 10Krpm 2,5" (not really selectable, but I think the
> webshop is wrong..)
> 8GB ram
> Perc 5/i. battery backup. 256MB ram.
> 2 * 4 Xeon 2,66GHz
>
> 3) HP ProLiant DL380 G5 (2U)
> 8 * 146 GB SAS 10Krpm 2,5"
> 8GB ram
> P400 raid controller. battery backup. 512MB ram.
> 2 * 2 Xeon 3GHz
> Apart from the disks, you might also investigate using Opterons instead
> of Xeons. there appears to be some significant dent in performance
> between Opteron and Xeon. Xeons appear to spend more time in passing
> around ownership of memory cache lines in case of a spinlock.
> It's not yet clear whether or not here has been worked around the issue.
> You should at least investigate it a bit.
>
> We're using a HP DL385 ourselves which performs quite well.

Not atm. Until new benchmarks are published comparing AMD's new quad-core with Intel's ditto, Intel has the edge.

http://tweakers.net/reviews/657/6

--
regards
Claus

When lenity and cruelty play for a kingdom, the gentlest gamester is the soonest winner.

Shakespeare
On Nov 9, 2007 10:40 AM, Claus Guttesen <kometen@gmail.com> wrote:
> > Apart from the disks, you might also investigate using Opterons instead
> > of Xeons. there appears to be some significant dent in performance
> > between Opteron and Xeon. Xeons appear to spend more time in passing
> > around ownership of memory cache lines in case of a spinlock.
> > It's not yet clear whether or not here has been worked around the issue.
> > You should at least investigate it a bit.
> >
> > We're using a HP DL385 ourselves which performs quite well.
>
> Not atm. Until new benchmarks are published comparing AMD's new
> quad-core with Intel's ditto, Intel has the edge.
>
> http://tweakers.net/reviews/657/6

For 8 cores, it appears AMD has the lead; read this (stolen from another thread):

http://people.freebsd.org/~kris/scaling/7.0%20Preview.pdf
On Fri, 9 Nov 2007, Scott Marlowe wrote:

>> Not atm. Until new benchmarks are published comparing AMD's new
>> quad-core with Intel's ditto, Intel has the edge.
>> http://tweakers.net/reviews/657/6
>
> For 8 cores, it appears AMD has the lead, read this (stolen from
> another thread):
> http://people.freebsd.org/~kris/scaling/7.0%20Preview.pdf

This issue isn't simple, and it may be the case that both conclusions are correct in their domain but are testing slightly different things. The sysbench test used by the FreeBSD benchmark is much simpler than what the tweakers.net benchmark simulates.

Current generation AMD and Intel processors are pretty close in performance, but guessing which will work better involves a complicated mix of both CPU and memory issues. AMD's NUMA architecture does some things better, and Intel's memory access takes a second hit in designs that use FB-DIMMs. But Intel has enough of an advantage on actual CPU performance and CPU caching that current designs are usually faster regardless. For an interesting look at the low-level details here, the current mainstream parts are compared at http://techreport.com/articles.x/11443/13 and a similar comparison for the just released quad-core Opterons is at http://techreport.com/articles.x/13176/12

Nowadays Intel vs. AMD is tight enough that I don't even worry about that part in the context of a database application (there was still a moderate gap when the Tweakers results were produced a year ago). On a real server, I'd suggest being more worried about how good the disk controller is, what the expansion options are there, and relative $/core. In the x86/x64 realm, I don't feel CPU architecture is a huge issue right now when you're running a database.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
On Nov 8, 2007, at 3:56 PM, Alan Hodgson wrote:

> You can't touch RAID 10 for performance or reliability. The only
> reason to use RAID 5 or RAID 6 is to get more capacity out of the same
> drives.

Maybe you can't, but I can. I guess I have better toys than you :-)
On November 9, 2007, Vivek Khera <khera@kcilink.com> wrote:
> On Nov 8, 2007, at 3:56 PM, Alan Hodgson wrote:
> > You can't touch RAID 10 for performance or reliability. The only
> > reason to use RAID 5 or RAID 6 is to get more capacity out of the
> > same drives.
>
> Maybe you can't, but I can. I guess I have better toys than you :-)

OK, I'll bite. Name one RAID controller that gives better write performance in RAID 6 than it does in RAID 10, and post the benchmarks.

I'll grant a theoretical reliability edge to RAID 6 (although actual implementations are a lot more iffy), but not performance.

--
The ethanol craze means that we're going to burn up the Midwest's last six inches of topsoil in our gas-tanks.
On Tue, 13 Nov 2007, Alan Hodgson wrote:

> OK, I'll bite. Name one RAID controller that gives better write
> performance in RAID 6 than it does in RAID 10, and post the benchmarks.
>
> I'll grant a theoretical reliability edge to RAID 6 (although actual
> implementations are a lot more iffy), but not performance.

Ok, Areca ARC1261ML. Note that results were similar for an 8 drive RAID6 vs 8 drive RAID10, but I don't have those bonnie results any longer.

Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
14xRAID6       63G 73967  99 455162  58 164543  23 77637  99 438570  31 912.2   1
                   ------Sequential Create------ --------Random Create--------
                   -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
             files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                16 12815  63 +++++ +++ 13041  61 12846  67 +++++ +++ 12871  59

Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
14xRAID10      63G 63968  92 246143  68 140634  30 77722  99 510904  36 607.8   0
                   ------Sequential Create------ --------Random Create--------
                   -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
             files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                16  6655  16 +++++ +++  5755  12  7259  17 +++++ +++  5550  12

--
Jeff Frost, Owner <jeff@frostconsultingllc.com>
Frost Consulting, LLC http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954
On Nov 8, 2007 1:22 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> Mine too. I would suggest though, that by the time you get to 14
> disks, you switch from RAID-5 to RAID-6 so you have double redundancy.
> Performance of a degraded array is better in RAID6 than RAID5, and
> you can run your rebuilds much slower since you're still redundant.

A couple of remarks here:

* Personally I'm not a believer in raid 6; it seems to hurt random write performance, which is already a problem with raid 5... I prefer the hot spare route, or raid 10.
* The perc 5 sas controller is a rebranded lsi megaraid controller with some custom firmware tweaks. For example, the perc 5/e is a rebranded 8408 megaraid iirc.
* Perc 5 controllers are decent if unspectacular: good raid 5 performance, average raid 10.
* To the OP, the 15k solution (dell 2900) will likely perform the best, if you don't mind the rack space.
* Again to the OP, you can possibly consider combining the o/s and the wal volumes (2xraid 1 + 6xraid 10). Combining the o/s and wal volumes can sometimes also be a win, but doesn't sound likely in your case.

merlin
On Tuesday 13 November 2007, Jeff Frost <jeff@frostconsultingllc.com> wrote:
> Ok, Areca ARC1261ML. Note that results were similar for an 8 drive RAID6
> vs 8 drive RAID10, but I don't have those bonnie results any longer.
>
> [bonnie++ results snipped: 14xRAID6 912.2 seeks/sec vs 14xRAID10 607.8
> seeks/sec; full output upthread]

OK, impressive RAID-6 performance (not so impressive RAID-10 performance, but that could be a filesystem issue). Note to self; try an Areca controller in next storage server.

thanks.

--
The global consumer economy can best be described as the most efficient way to convert natural resources into waste.
On Wed, 14 Nov 2007, Alan Hodgson wrote:

> On Tuesday 13 November 2007, Jeff Frost <jeff@frostconsultingllc.com> wrote:
>> Ok, Areca ARC1261ML. Note that results were similar for an 8 drive RAID6
>> vs 8 drive RAID10, but I don't have those bonnie results any longer.
>>
>> [bonnie++ results snipped]
>
> OK, impressive RAID-6 performance (not so impressive RAID-10 performance,
> but that could be a filesystem issue). Note to self; try an Areca
> controller in next storage server.

I believe these were both on ext3. I thought I had some XFS results available for comparison, but I couldn't find them.

--
Jeff Frost, Owner <jeff@frostconsultingllc.com>
Frost Consulting, LLC http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954
On Wednesday 14 November 2007, Jeff Frost <jeff@frostconsultingllc.com> wrote:
> > OK, impressive RAID-6 performance (not so impressive RAID-10
> > performance, but that could be a filesystem issue). Note to self; try
> > an Areca controller in next storage server.
>
> I believe these were both on ext3. I thought I had some XFS results
> available for comparison, but I couldn't find them.

Yeah, I've seen ext3 write performance issues on RAID-10. XFS is much better.

--
Q: Why did God create economists?
A: In order to make weather forecasters look good.
On Nov 14, 2007 5:24 PM, Alan Hodgson <ahodgson@simkin.ca> wrote:
> On Tuesday 13 November 2007, Jeff Frost <jeff@frostconsultingllc.com> wrote:
> > Ok, Areca ARC1261ML. Note that results were similar for an 8 drive RAID6
> > vs 8 drive RAID10, but I don't have those bonnie results any longer.
> >
> > [bonnie++ results snipped]
>
> OK, impressive RAID-6 performance (not so impressive RAID-10 performance,
> but that could be a filesystem issue). Note to self; try an Areca
> controller in next storage server.

607 seeks/sec on an 8 drive raid 10 is terrible... this is not as dependent on filesystem as sequential performance...

merlin
On Wed, 14 Nov 2007, Merlin Moncure wrote:

> On Nov 14, 2007 5:24 PM, Alan Hodgson <ahodgson@simkin.ca> wrote:
>> On Tuesday 13 November 2007, Jeff Frost wrote:
>>> Ok, Areca ARC1261ML. Note that results were similar for an 8 drive RAID6
>>> vs 8 drive RAID10, but I don't have those bonnie results any longer.
>>>
>>> [bonnie++ results snipped]
>>
>> OK, impressive RAID-6 performance (not so impressive RAID-10 performance,
>> but that could be a filesystem issue). Note to self; try an Areca
>> controller in next storage server.
>
> 607 seeks/sec on a 8 drive raid 10 is terrible...this is not as
> dependant on filesystem as sequential performance...

Then this must be horrible since it's a 14 drive raid 10. :-/

If we had more time for the testing, I would have tried a bunch of RAID1 volumes and used software RAID0 to add the +0 bit and see how that performed.
Merlin, what sort of seeks/sec from bonnie++ do you normally see from your RAID10 volumes? On an 8xRAID10 volume with the smaller Areca controller we were seeing around 450 seeks/sec.

--
Jeff Frost, Owner <jeff@frostconsultingllc.com>
Frost Consulting, LLC http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954
On Nov 14, 2007, at 9:19 PM, Jeff Frost wrote:

> On an 8xRAID10 volume with the smaller Areca controller we were
> seeing around 450 seeks/sec.

On our 6 disk raid10 on a 3ware 9550sx I'm able to get about 120 seek + reads/sec per process, with an aggregate up to about 500 or so. The disks are rather pooey 7.5k sata2 disks.

I'd been having perf issues and I'd been wondering why my IO stats were low.. turns out it was going as fast as the disks or controller could go. I even went so far as to write a small tool to sort-of simulate a PG index scan to remove all that from the question. It proved my theory - seek performance was murdering us.

This information led me to spend a pile of money on an MSA70 (HP) and a pile of 15k SAS disks. While significantly more expensive, the perf gains were astounding. I have 8 disks in a raid6 (iirc, I had comparable numbers for R10, but the space/cost/performance wasn't worth it). I'm able to get about 350-400tps, per process, with an aggregate somewhere in the 1000s. (I drove it up to 2000 before stopping during testing)

Whether the problem is the controller or the disks, I don't know. I just know what my numbers tell me. (And the day we went live on the MSA a large number of our perf issues went away. Although, now that the IO was plenty sufficient the CPU became the bottleneck! It's always something!)

The sata array performs remarkably well for a sequential read though. Given our workload, we need the random perf much more than seq, but I can see the opposite being true in a warehouse workload.

btw, the tool I wrote is here http://pgfoundry.org/projects/pgiosim/

--
Jeff Trout <jeff@jefftrout.com>
http://www.dellsmartexitin.com/
http://www.stuarthamm.net/
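[Editor's note: a minimal sketch of what a pgiosim-style test does, not the pgiosim code itself: time a front-to-back read of a file versus reads at random offsets, the latter standing in for the seeks of an index scan. The file size and block size here are arbitrary; a real run would use a file much larger than RAM so the OS cache can't hide the seeks.]

```python
import os
import random
import tempfile
import time

def sequential_read(path, block=8192):
    """Read the whole file front to back, one block at a time."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block):
            pass
    return time.perf_counter() - start

def random_reads(path, n=1000, block=8192):
    """Issue n reads at random offsets, forcing a seek before each one."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        for _ in range(n):
            f.seek(random.randrange(0, max(1, size - block)))
            f.read(block)
    return time.perf_counter() - start

# Build a small scratch file for demonstration purposes.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(os.urandom(8 * 1024 * 1024))
seq_t = sequential_read(path)
rnd_t = random_reads(path)
os.unlink(path)
print(f"sequential: {seq_t:.3f}s  random: {rnd_t:.3f}s")
```

On rotating disks the random pass is dominated by seek time, which is why the thread keeps coming back to seeks/sec rather than MB/sec as the number that matters for OLTP.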
On Nov 14, 2007, at 5:36 PM, Jeff Frost wrote: > > I believe these were both on ext3. I thought I had some XFS results > available for comparison, but I couldn't find them. You'd see similar with the UFS2 file system on FreeBSD.