Re: H800 + md1200 Performance problem

From Cesar Martin
Subject Re: H800 + md1200 Performance problem
Msg-id CAMAsR=5OupDm9CoZE8ZiURSs+F6oYpHcQPwkUcvNvrdpqSJ=GA@mail.gmail.com
In reply to Re: H800 + md1200 Performance problem  (Cesar Martin <cmartinp@gmail.com>)
Responses Re: H800 + md1200 Performance problem
List pgsql-performance
Hi,

Finally, the problem was the BIOS configuration: DBPM was set to "Active Power Controller" and I changed it to "Max Performance". See http://en.community.dell.com/techcenter/power-cooling/w/wiki/best-practices-in-power-management.aspx
Now write speed is 550 MB/s and read speed is 1.1 GB/s.
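
A quick way to confirm from the OS side that the machine is no longer throttling (a sketch, assuming the kernel exposes cpufreq in sysfs; under BIOS-controlled "Max Performance" these files may not be present at all):

grep . /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor    # governor per core
grep . /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq    # current frequency
grep . /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq    # should match under load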

Thank you all for your advice.

On April 9, 2012 at 18:24, Cesar Martin <cmartinp@gmail.com> wrote:
Hi,

Today I'm running new benchmarks with RA, NORA, WB and WT on the controller:

With NORA
-----------------
dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384
16384+0 records in
16384+0 records out
137438953472 bytes (137 GB) copied, 318,306 s, 432 MB/s

With RA
------------
dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384
16384+0 records in
16384+0 records out
137438953472 bytes (137 GB) copied, 179,712 s, 765 MB/s
dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384
16384+0 records in
16384+0 records out
137438953472 bytes (137 GB) copied, 202,948 s, 677 MB/s
dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384
16384+0 records in
16384+0 records out
137438953472 bytes (137 GB) copied, 213,157 s, 645 MB/s

With Adaptive RA
-----------------
[root@cltbbdd01 ~]# dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384
16384+0 records in
16384+0 records out
137438953472 bytes (137 GB) copied, 169,533 s, 811 MB/s
[root@cltbbdd01 ~]# dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384
16384+0 records in
16384+0 records out
137438953472 bytes (137 GB) copied, 207,223 s, 663 MB/s

The differences between runs of the same test under the same conditions are very strange... It seems that adaptive read-ahead is the best option.
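
Part of that run-to-run variance may simply be the Linux page cache; a minimal way to make the read runs comparable (assuming root and a kernel with /proc/sys/vm/drop_caches, i.e. 2.6.16 or later) is to flush and drop the cache before each dd:

sync                                  # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches     # drop page cache, dentries and inodes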

For the write tests, I applied tuned-adm throughput-performance, which changes the IO elevator to deadline and raises vm.dirty_ratio to 40.
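
To double-check what the profile actually changed, something like this (a sketch; sdb is only an example device name, use whatever device backs /vol02):

tuned-adm active                      # confirm the active profile
cat /sys/block/sdb/queue/scheduler    # the scheduler in [brackets] is active
sysctl vm.dirty_ratio                 # should now report 40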

With WB
-------------
dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384
16384+0 records in
16384+0 records out
137438953472 bytes (137 GB) copied, 539,041 s, 255 MB/s
dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384
16384+0 records in
16384+0 records out
137438953472 bytes (137 GB) copied, 505,695 s, 272 MB/s

Enforce WB
-----------------
dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384
16384+0 records in
16384+0 records out
137438953472 bytes (137 GB) copied, 662,538 s, 207 MB/s

With WT
--------------
dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384
16384+0 records in
16384+0 records out
137438953472 bytes (137 GB) copied, 750,615 s, 183 MB/s

I think these results are more logical... WT gives poor performance, and the differences within a single test are minimal.

Later I ran a pair of dd processes at the same time:

dd if=/dev/zero of=/vol02/bonnie/DD2 bs=8M count=16384
16384+0 records in
16384+0 records out
137438953472 bytes (137 GB) copied, 633,613 s, 217 MB/s
dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384
16384+0 records in
16384+0 records out
137438953472 bytes (137 GB) copied, 732,759 s, 188 MB/s

It's very strange that with two dd processes in parallel I get about 400 MB/s combined. It's as if CentOS had a limit on the IO throughput of a single process...
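
One way to test that theory (a sketch; sdb is an assumed device name, and oflag=direct is used so the page cache does not mask the array's real write speed):

# In one terminal, start two writers to separate files:
dd if=/dev/zero of=/vol02/bonnie/DD1 bs=8M count=16384 oflag=direct &
dd if=/dev/zero of=/vol02/bonnie/DD2 bs=8M count=16384 oflag=direct &
wait                         # block until both writers finish
# Meanwhile, in a second terminal, watch the aggregate device throughput:
iostat -m -x sdb 5           # the wMB/s column shows the combined write speed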


On April 5, 2012 at 22:06, Tomas Vondra <tv@fuzzy.cz> wrote:

On 5.4.2012 20:43, Merlin Moncure wrote:
> The original problem is a read-based performance issue though, and this
> will not have any effect on that whatsoever (although it's still
> excellent advice).  Also, dd should bypass the o/s buffer cache.  I'm
> still pretty much convinced that there is a fundamental performance
> issue with the raid card that dell needs to explain.

Well, there are two issues IMHO.

1) Read performance that's not exactly as good as one would expect from a
  12 x 15k SAS RAID10 array. Given that 15k Cheetah drives usually do
  about 170 MB/s for sequential reads/writes, I'd definitely expect
  more than 533 MB/s when reading the data; at least something near
  1 GB/s (equal to 6 drives).

  Hmm, the dd read performance seems to grow over time - I wonder if
  this is the issue with adaptive read policy, as mentioned in the
  xbitlabs report.

  Cesar, can you set the read policy to 'read ahead':

    megacli -LDSetProp RA -LALL -aALL

  or maybe to 'no read-ahead':

    megacli -LDSetProp NORA -LALL -aALL

  It's worth a try; maybe it somehow conflicts with the way the kernel
  handles read-ahead or something. I find these adaptive heuristics
  a bit unpredictable ...
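
  It might also be worth checking how the kernel-level readahead is set
  for the device; a sketch (the device name is just an example):

    blockdev --getra /dev/sdb        # current readahead, in 512-byte sectors
    blockdev --setra 4096 /dev/sdb   # e.g. bump it to 2 MB and retest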

  Another thing - I see that patrol reads are enabled. Can you disable
  them and see how that affects the performance?
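
  For reference, patrol read can be toggled with something like this
  (same megacli syntax caveats as the LDSetProp examples above):

    megacli -AdpPR -Dsbl -aALL       # disable patrol read
    megacli -AdpPR -EnblAuto -aALL   # restore the default auto mode later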

2) Write performance behaviour, which is much more suspicious ...

  Not sure if it's related to the read performance issues.

Tomas




--
César Martín Pérez
cmartinp@gmail.com
