How to measure IO performance?

From:
Andre Brandt
Date:
Hi out there,

I have a few questions, perhaps you can help me...

At the moment, we're planning our new clustered ERP system, which
consists of a Java application server and a PostgreSQL database. The
hardware currently used for that system isn't able to handle the
workload (2 processors, load of 6-8, 12 GB RAM), so it is very, very
slow - and that although we have already deactivated a lot of features
we normally want to use, such as logging...
We've already chosen some hardware for the new cluster (2x quad-core
Xeon + 64 GB RAM should handle that - also in case of failover, when
one server has to handle both application and database! The current
system can't do that anymore...), but I also have to choose the
storage hardware. And that is a problem - we know that the servers
will be fast enough, but we don't know how much I/O performance is
needed.
At the moment, we're using SCSI-based shared storage (an HP MSA500G2,
which contains 10 disks for the database - 8x data (RAID 1+0) + 2x
logs (RAID 1)), and we often see a lot of I/O wait when 200 concurrent
users are working... (and we think the I/O wait will increase heavily
once all the features we need are activated).
So in order to get rid of I/O wait (as far as possible), we have to
increase the I/O performance. Because there are a lot of storage
systems out there, we need to know how many I/Os per second we
actually need (to decide whether a given storage system can handle
our load or whether a bigger system is required). Do you have some
suggestions on how to measure that?
Do you have experience with Postgres on something like an HP
MSA2000 (10-20 disks) or RamSan systems?

Best regards,
Andre



Re: How to measure IO performance?

From:
"Scott Marlowe"
Date:
On Tue, Sep 9, 2008 at 7:59 AM, Andre Brandt <brandt@decoit.de> wrote:
> Hi out there,
>
> I have a few questions, perhaps you can help me...
>
> So in order to get rid of I/O wait (as far as possible), we have to
> increase the I/O performance. Because there are a lot of storage
> systems out there, we need to know how many I/Os per second we
> actually need (to decide whether a given storage system can handle
> our load or whether a bigger system is required). Do you have some
> suggestions on how to measure that?
> Do you have experience with Postgres on something like an HP
> MSA2000 (10-20 disks) or RamSan systems?

Generally the best bang for the buck is a direct attached storage
system with a high quality RAID controller, like the 3Ware, Areca, or
LSI controllers, or HP's 800 series.  I've heard a few good reports on
higher end Adaptecs, but most Adaptec RAID controllers are pretty poor
db performers.

To get an idea of how much I/O you'll need, you need to see how much
you use now.  A good way to do that is to come up with a realistic
benchmark and run it at a low level of concurrency on your current
system, while running iostat and/or vmstat in the background; pidstat
can be pretty useful too.  Run a LONG benchmark so it averages out -
you don't want to rely on a 5 minute benchmark.  Once you have some
base numbers, increase the scaling factor (i.e. the number of threads
under test) and measure I/O, CPU, etc. for that test.
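
In case it helps, here's a minimal sketch of that kind of measurement
in Python, reading /proc/diskstats directly (which is where iostat
gets its numbers from).  The device name and the sleep placeholder are
assumptions you'd replace with your own device and benchmark run:

import time

DEVICE = "sda"  # assumption: the block device holding the data directory

def io_counters(device):
    # /proc/diskstats: field 3 is the device name, field 4 is reads
    # completed, field 8 is writes completed
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[3]), int(fields[7])
    raise ValueError("device %s not found" % device)

reads_before, writes_before = io_counters(DEVICE)
t0 = time.time()

# ... run your realistic 1x benchmark here instead of sleeping ...
time.sleep(3600)  # placeholder for a LONG run, not a 5 minute one

reads_after, writes_after = io_counters(DEVICE)
elapsed = time.time() - t0

print("avg read IOPS:  %.1f" % ((reads_after - reads_before) / elapsed))
print("avg write IOPS: %.1f" % ((writes_after - writes_before) / elapsed))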

Now, figure out how high a load factor you'd need to run your full
load, multiply that by your 1x benchmark's I/O numbers, and then add a
fudge factor of 2 to 10 times for overhead.
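
Just to make the arithmetic concrete, a sketch with made-up numbers
(every figure below is an assumption for illustration; only the
formula matters):

# hypothetical numbers, purely for illustration
base_iops = 150        # average IOPS measured during the 1x benchmark
base_users = 20        # concurrency used for that benchmark run
target_users = 200     # the full production load you need to support
fudge = 3              # 2x-10x headroom for checkpoints, vacuum, growth

required_iops = base_iops * (target_users / base_users) * fudge
print(required_iops)   # -> 4500 IOPS to ask a storage vendor about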

The standard way to handle more I/O ops per second is to add spindles.
It might take more than one RAID controller or external RAID enclosure
to meet your needs.
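
To turn an IOPS target back into a disk count, roughly (the per-disk
figure and the read/write mix below are only guesses - a 15k RPM drive
manages somewhere around 180 random IOPS, and RAID 1+0 turns every
logical write into two physical writes):

required_iops = 4500     # from the estimate above
iops_per_disk = 180      # rough figure for a single 15k RPM spindle
write_fraction = 0.5     # guessed share of writes in the workload

physical_iops = required_iops * (1 + write_fraction)  # mirrored writes count twice
print(round(physical_iops / iops_per_disk))            # -> about 38 data spindles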