Re: how to configure my new server
From        | scott.marlowe
Subject     | Re: how to configure my new server
Date        |
Msg-id      | Pine.LNX.4.33.0302071354460.14360-100000@css120.ihs.com
In reply to | Re: how to configure my new server (Andreas Pflug <Andreas.Pflug@web.de>)
Replies     | Re: how to configure my new server (Andreas Pflug <Andreas.Pflug@web.de>)
List        | pgsql-performance
On Fri, 7 Feb 2003, Andreas Pflug wrote:

> scott.marlowe wrote:
>
> > I can get aggregate reads of about 48 Megs a second on a pair of 10k 18
> > gig UW scsi drives in RAID1 config.  I'm not saying there's no room for
> > improvement, but for what I use it for, it gives very good performance.
>
> Scott,
>
> as most people talking about performance you mean throughput, but this
> is not the most important parameter for databases. Reading the comments
> of other users with software and IDE RAID, it seems to me that indeed
> these solutions are only good at this discipline.

Well, I have run bonnie across it and several other options as well, and
the RAID cards I've tested (MegaRAID 428 kind of stuff, i.e. 2 or 3 years
old) were no better than Linux software RAID at any of the tests.  In some
cases they were much slower.

> Another suggestion:
> You're right, a hardware RAID controller is nothing but a stripped-down
> system that does nothing more than a software RAID would do either. But
> this is the argument Intel has been making for years now. There were
> times when Intel said "you don't need an intelligent graphics
> controller, just use a fast processor". Well, development went another
> direction, and it's good this way. Same with specialized controllers.
> They take the burden off the central processing unit, which can
> concentrate on the complicated things, not just getting some block from
> disk. Look at most Intel-based servers. Often the CPU speed is less than
> workstation CPUs, and the RAM technology is one step behind. But they
> have a sophisticated infrastructure for coprocessing. This is the way to
> speed things up, not pumping up the CPU.

Hey, I was an Amiga owner, so I'm all in favor of moving off the CPU
whatever you can.  But that's only a win if you're on a machine that is
CPU/interrupt bound.  If the machine sits at 99% idle with most of the
waiting being I/O, and it has 4 CPUs anyway, then you may or may not gain
from moving the work onto another card.  While SSL and similar encryption
is CPU intensive, the XORing needed for RAID checksums is very simple to
do quickly on modern architectures, so there's no great gain in that
department.  I'd imagine the big gain would come from on-board
battery-backed write-through or write-behind cache memory.  I think the
fastest solutions have always been the big outboard boxes with the RAID
built in, and the PCI cards tend to be also-rans in comparison.  But the
one point I'm sure we'll agree on is that until you test it with your
workload, you won't really know which is better, if either.

> If you get two of three HDs bad in a RAID5 array, you're lost. That's
> the case for all RAID5 solutions, because the redundancy is just one
> disk. Better solutions will allow for spare disks that jump in as soon
> as one fails; hopefully the rebuild finishes before the next one fails.

The problem was that all three drives were good.  He moved the server, a
cable came half off, the card marked the drives as bad, and it wouldn't
accept them back until it had formatted them.  This wasn't the first time
I'd seen this kind of problem with RAID controllers either, as it had
happened to me while testing one a few years earlier.  Which is one of
the many life experiences that makes me like backups so much.
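[Editor's note: the "simple XORing" for RAID parity mentioned above can be
sketched in a few lines. This is a toy illustration, not code from any real
RAID implementation; the function name and block contents are made up.]

```python
# RAID5-style parity: the parity block is the byte-wise XOR of the data
# blocks, and any single lost block can be rebuilt by XORing the survivors.

def xor_blocks(blocks):
    """Return the byte-wise XOR of a list of equal-length byte blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three "disks" worth of data plus one parity block (illustrative values).
d0 = b"\x01\x02\x03\x04"
d1 = b"\x10\x20\x30\x40"
d2 = b"\xaa\xbb\xcc\xdd"
parity = xor_blocks([d0, d1, d2])

# Lose d1; rebuild it from the surviving data blocks plus the parity.
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1
```

Since losing a second block leaves two unknowns in one XOR equation, this
also shows why a double failure in RAID5 is unrecoverable.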