8K recordsize bad on ZFS?

From: Josh Berkus
Subject: 8K recordsize bad on ZFS?
Msg-id: 4BE4ABC9.6040106@agliodbs.com
Replies: Re: 8K recordsize bad on ZFS?  (Dimitri <dimitrik.fr@gmail.com>)
         Re: 8K recordsize bad on ZFS?  (Jignesh Shah <jkshah@gmail.com>)
List: pgsql-performance
Jignesh, All:

Most of our Solaris users have been, I think, following Jignesh's advice
from his benchmark tests and setting the ZFS recordsize to 8K on the data
zpool.  However, I've discovered that this is sometimes a serious problem
on some hardware.

For example, having the recordsize set to 8K on a Sun 4170 with 8 drives
recently gave me these appalling Bonnie++ results:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   4     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
db111           24G           260044  33 62110  17           89914  15 1167  25
Latency                        6549ms    4882ms              3395ms     107ms
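(For reference, output in that shape comes from a Bonnie++ run along these lines; the directory and user here are placeholders, not from the original post, but the size and concurrency match the figures above.)

```shell
# 4 concurrent processes, 24G test file size, run as an unprivileged user.
# -d: directory on the zpool under test; -s: file size; -c: concurrency; -u: user
bonnie++ -d /zpool/testdir -s 24g -c 4 -u postgres
```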

I know that's hard to read.  What it's saying is:

Seq Writes: 260MB/s combined
Seq Reads: 89MB/s combined
Read Latency: 3.4s

Best guess is that this is a result of overloading the array/drives with
commands for all those small blocks; certainly the behavior we observed
(stuttering I/O, high latency) is consistent with that issue.
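To put a rough number on that guess, here is some back-of-envelope arithmetic of mine (not from the post): at a fixed throughput, the request rate the array must sustain scales inversely with the recordsize.

```python
# Illustrative arithmetic: ZFS records moved per second at a given throughput.
def records_per_second(throughput_mb_s: float, recordsize_kb: int) -> float:
    """Records per second = throughput (in KB/s) / recordsize (in KB)."""
    return throughput_mb_s * 1024 / recordsize_kb

# At the ~90 MB/s sequential read rate observed above:
small = records_per_second(90, 8)    # 8K records
large = records_per_second(90, 128)  # 128K records

print(f"8K recordsize:   {small:,.0f} records/s")
print(f"128K recordsize: {large:,.0f} records/s")
```

A 16x difference in the number of operations for the same bytes moved, which is plenty to saturate a queue-limited array.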

Anyway, since this is a DW-like workload, we just bumped the recordsize
up to 128K and the performance issues went away ... reads up over 300MB/s.
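For anyone making the same change, the commands look roughly like this (the dataset name tank/pgdata is a placeholder). One caveat: recordsize only applies to files written after it is set, so existing data files keep their old record layout until rewritten (e.g. via dump/reload or a file copy).

```shell
# Check the current recordsize on the dataset holding the data directory
zfs get recordsize tank/pgdata

# Raise it to 128K for DW-style sequential-scan workloads
zfs set recordsize=128k tank/pgdata
```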

--
                                  -- Josh Berkus
                                     PostgreSQL Experts Inc.
                                     http://www.pgexperts.com
