Re: Filesystem benchmarking for pg 8.3.3 server

From: Henrik
Subject: Re: Filesystem benchmarking for pg 8.3.3 server
Date:
Msg-id: AC012133-6932-46DD-A723-B2DCF98E5380@mac.se
In reply to: Re: Filesystem benchmarking for pg 8.3.3 server  (david@lang.hm)
Responses: Re: Filesystem benchmarking for pg 8.3.3 server  (Jeff <threshar@torgo.978.org>)
List: pgsql-performance
OK, changed the SAS RAID 10 to RAID 5 and now my random writes are
handling 112 MB/s. So it is almost twice as fast as the RAID 10 with
the same disks. Any ideas why?

Are the iozone tests faulty?

What are your suggestions? Trust the IOZone tests and use RAID 5 instead
of RAID 10, or go with RAID 10, since it should be faster and better
suited when we add more disks in the future?

I'm a little confused by the benchmarks.

This is from the RAID5 tests on 4 SAS 15K drives...

iozone -e -i0 -i1 -i2 -i8 -t1 -s 1000m -r 8k -+u -F /database/iotest

    Children see throughput for 1 random writers    =  112074.58 KB/sec
    Parent sees throughput for 1 random writers     =  111962.80 KB/sec
    Min throughput per process                      =  112074.58 KB/sec
    Max throughput per process                      =  112074.58 KB/sec
    Avg throughput per process                      =  112074.58 KB/sec
    Min xfer                                        = 1024000.00 KB
    CPU utilization: Wall time    9.137    CPU time    0.510    CPU utilization   5.58 %
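For reference, here is a minimal annotated sketch (Python, used only to spell the command out, not part of the benchmark itself) of what those iozone flags request; the flag meanings are my reading of the iozone documentation, and the file size, record size and path are the ones used above.

import shlex

# Annotated version of the iozone invocation above; double-check the flag
# meanings against `iozone -h` on your build.
iozone_args = [
    "iozone",
    "-e",                        # include flush (fsync/fflush) in the timings
    "-i0", "-i1", "-i2", "-i8",  # tests: write/rewrite, read/reread, random read/write, mixed workload
    "-t1",                       # throughput mode with a single child process
    "-s", "1000m",               # 1000 MB file per process
    "-r", "8k",                  # 8 kB records, matching PostgreSQL's block size
    "-+u",                       # report CPU utilization
    "-F", "/database/iotest",    # test file on the partition being measured
]

print(shlex.join(iozone_args))
# -> iozone -e -i0 -i1 -i2 -i8 -t1 -s 1000m -r 8k -+u -F /database/iotest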




On 9 Aug 2008, at 04:24, david@lang.hm wrote:

> On Fri, 8 Aug 2008, Henrik wrote:
>
>> But random writes should be faster on a RAID10 as it doesn't need
>> to calculate parity. That is why people suggest RAID 10 for
>> databases, correct?
>>
>> I can understand that RAID5 can be faster with sequential writes.
>
> the key word here is "can" be faster, it depends on the exact
> implementation, stripe size, OS caching, etc.
>
> the ideal situation would be that the OS would flush exactly one
> stripe of data at a time (aligned with the array) and no reads would
> need to be done, merely calculate the parity info for the new data
> and write it all.
>
> the worst case is when the write size is small in relation to the
> stripe size and crosses the stripe boundary. In that case the system
> needs to read data from multiple stripes to calculate the new parity
> and write the data and parity data.
>
> I don't know any systems (software or hardware) that meet the ideal
> situation today.
>
> when comparing software and hardware raid, one other thing to
> remember is that CPU and I/O bandwidth that's used for software raid
> is not available to do other things.
>
> so a system that benchmarks much faster with software raid could end
> up being significantly slower in practice if it needs that CPU and I/
> O bandwidth for other purposes.
>
> examples could be needing the CPU/memory capacity to search through
> large amounts of RAM once the data is retrieved from disk, or finding that
> you have enough network I/O that it combines with your disk I/O to
> saturate your system busses.
>
> David Lang
>
>
>> //Henke
>>
>> On 8 Aug 2008, at 16:53, Luke Lonergan wrote:
>>
>>> Your expected write speed on a 4 drive RAID10 is two drives worth,
>>> probably 160 MB/s, depending on the generation of drives.
>>> The expected write speed for a 6 drive RAID5 is 5 drives worth, or
>>> about 400 MB/s, sans the RAID5 parity overhead.
>>> - Luke
>>> ----- Original Message -----
>>> From: pgsql-performance-owner@postgresql.org <pgsql-performance-owner@postgresql.org>
>>> To: pgsql-performance@postgresql.org <pgsql-performance@postgresql.org>
>>> Sent: Fri Aug 08 10:23:55 2008
>>> Subject: [PERFORM] Filesystem benchmarking for pg 8.3.3 server
>>> Hello list,
>>> I have a server with a direct attached storage containing 4 15k SAS
>>> drives and 6 standard SATA drives.
>>> The server is a quad core Xeon with 16 GB RAM.
>>> Both server and DAS have dual PERC/6E RAID controllers with 512 MB
>>> BBU.
>>> There are two RAID sets configured:
>>> One RAID 10 containing 4 SAS disks
>>> One RAID 5 containing 6 SATA disks
>>> There is one partition per RAID set with an ext2 filesystem.
>>> I ran the following iozone test which I stole from Joshua Drake's test at
>>> http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/
>>> I ran this test against the RAID 5 SATA partition
>>> #iozone -e -i0 -i1 -i2 -i8 -t1 -s 1000m -r 8k -+u
>>> With these random write results
>>>
>>>       Children see throughput for 1 random writers    =  168647.33 KB/sec
>>>       Parent sees throughput for 1 random writers     =  168413.61 KB/sec
>>>       Min throughput per process                      =  168647.33 KB/sec
>>>       Max throughput per process                      =  168647.33 KB/sec
>>>       Avg throughput per process                      =  168647.33 KB/sec
>>>       Min xfer                                        = 1024000.00 KB
>>>       CPU utilization: Wall time    6.072    CPU time    0.540    CPU utilization   8.89 %
>>> Almost 170 MB/s. Not bad for 6 standard SATA drives.
>>> Then I ran the same thing against the RAID 10 SAS partition
>>>
>>>       Children see throughput for 1 random writers    =   68816.25 KB/sec
>>>       Parent sees throughput for 1 random writers     =   68767.90 KB/sec
>>>       Min throughput per process                      =   68816.25 KB/sec
>>>       Max throughput per process                      =   68816.25 KB/sec
>>>       Avg throughput per process                      =   68816.25 KB/sec
>>>       Min xfer                                        = 1024000.00 KB
>>>       CPU utilization: Wall time   14.880    CPU time    0.520    CPU utilization   3.49 %
>>> What, only 70 MB/s?
>>> Is it possible that the 2 extra spindles for the SATA drives make that
>>> partition so much faster, even though the disks and the RAID
>>> configuration should be slower?
>>> It feels like there is something fishy going on. Maybe the RAID 10
>>> implementation on the PERC/6e is crap?
>>> Any pointers, suggestions, ideas?
>>> I'm going to change the RAID 10 to a RAID 5 and test again and see
>>> what happens.
>>> Cheers,
>>> Henke
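To put rough numbers on David's ideal/worst case and Luke's "drives worth" estimates quoted above, here is a minimal back-of-the-envelope sketch in Python. The ~80 MB/s per-drive streaming figure and the 64 kB chunk size are assumptions chosen for illustration, not numbers from this thread, and the model ignores controller write-back caching entirely.

def expected_streaming_write(n_drives, level, per_drive_mb_s=80.0):
    """Best-case streaming write bandwidth, ignoring parity math and caching."""
    if level == "raid10":
        # mirrored pairs: only half the spindles carry unique data
        return (n_drives / 2) * per_drive_mb_s
    if level == "raid5":
        # one drive's worth of bandwidth is consumed by parity
        return (n_drives - 1) * per_drive_mb_s
    raise ValueError(f"unknown RAID level: {level}")

def raid5_ios_for_write(write_kb, data_disks, chunk_kb=64):
    """Physical I/Os RAID 5 needs for one logical write in this simplified model."""
    stripe_kb = data_disks * chunk_kb
    if write_kb % stripe_kb == 0:
        # ideal case: full, aligned stripes -- write the data plus new parity, read nothing
        stripes = write_kb // stripe_kb
        return {"reads": 0, "writes": stripes * (data_disks + 1)}
    # worst case: a small or unaligned write -- read the old data chunk and the old
    # parity, recompute, then write the new data chunk and the new parity
    return {"reads": 2, "writes": 2}

print(expected_streaming_write(4, "raid10"))   # ~160 MB/s, Luke's 4-drive RAID 10 estimate
print(expected_streaming_write(6, "raid5"))    # ~400 MB/s before parity overhead
print(raid5_ios_for_write(8, data_disks=3))    # one 8 kB write on 3+1 RAID 5: 2 reads + 2 writes
print(raid5_ios_for_write(192, data_disks=3))  # one aligned 192 kB stripe: 0 reads + 4 writes

That read-modify-write penalty on small writes is the usual reason RAID 10 is recommended for random database writes, which is exactly what makes the iozone numbers above look fishy.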

