Re: driving postgres to achieve benchmark results similar to bonnie++

From: Mike Broers
Subject: Re: driving postgres to achieve benchmark results similar to bonnie++
Date:
Msg-id: CAB9893hf1k59QLyDjWe+3Q1MQS7kfOJuJZu4EkpL+K3L0_7PAg@mail.gmail.com
In response to: Re: driving postgres to achieve benchmark results similar to bonnie++  (John Scalia <jayknowsunix@gmail.com>)
Responses: Re: driving postgres to achieve benchmark results similar to bonnie++
List: pgsql-admin
Thanks for the feedback, I'll update those configs, run some more tests, and follow up.



On Tue, May 10, 2016 at 12:01 PM, John Scalia <jayknowsunix@gmail.com> wrote:


Sent from my iPad

On May 10, 2016, at 10:48 AM, Mike Broers <mbroers@gmail.com> wrote:

I'm having trouble getting Postgres to drive enough disk activity to get even close to the disk benchmark numbers I'm getting with bonnie++. We have an SSD SAN, the xlog is on its own SSD volume, and we're running Postgres 9.5 on CentOS 6.

bonnie++ -n 0 -f -b is the command I'm running, pointed at either the primary data or the xlog location. I'm consistently seeing numbers like this:

Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
      Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
23808M           786274  92 157465  29           316097  17  5751  16
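
For clarity, the full invocation is something like this (the target directory and user are placeholders; -d just points bonnie++ at the volume under test):

bonnie++ -d /path/to/data_or_xlog -n 0 -f -b -u postgres

Without -s, bonnie++ defaults to a test file of roughly twice the machine's RAM.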

So during the bonnie++ tests I've confirmed in our monitoring that writes peak at 700-800 MB/sec and reads peak around 280-300 MB/sec.

We have 12GB of RAM on the server. When I run pgbench at a scale that puts the pgbench database in the realm of 18GB-25GB, I barely break 110MB/sec of writes and 80MB/sec of reads. I've run with different options such as unlogged vs. logged tables, prepared transactions or not, and transaction counts between 1000 and 40000.
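
For example, a typical run looks something like this (scale, client count, and database name are illustrative rather than the exact values used):

pgbench -i -s 1500 --unlogged-tables bench     # initialize; scale ~1500 puts the dataset in the 20GB+ range
pgbench -c 32 -j 4 -M prepared -t 40000 bench  # 32 clients x 40000 transactions each, prepared statements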

I thought a parallel pg_dump / restore might also drive the disks, but that doesn't push throughput either, topping out around 75MB/sec of reads. Nightly vacuums also seem to peak below 110MB/sec of reads.
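
For reference, the dump/restore was something along these lines (job count, paths, and database names are placeholders):

pg_dump -Fd -j 4 -f /path/to/dumpdir sourcedb    # directory format is required for a parallel dump
pg_restore -j 4 -d targetdb /path/to/dumpdir     # parallel restore into an existing database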

Here are the non-default Postgres settings:

max_connections = 1024
shared_buffers = 1024MB
wal_buffers = 16MB
checkpoint_completion_target = '.9'
archive_mode = on
random_page_cost = '1.5'
maintenance_work_mem = 512MB
work_mem = 64MB
max_wal_senders = 5
checkpoint_timeout = 10min
effective_io_concurrency = 4
effective_cache_size = 8GB
wal_keep_segments = 512
wal_level = hot_standby
synchronous_commit = off

Any idea if/why Postgres might be bottlenecking disk throughput? Or is there a testing method I'm missing that would get something closer to the bonnie++ levels from within Postgres? I'm guessing I'm just not driving enough activity to push it to the limit, but I'm not sure of a straightforward way to verify that.

Thanks,

Mike

Well, I'm no expert with bonnie++, but several of your PostgreSQL settings look off given the system config you provided. With 12GB of RAM, shared_buffers should probably be at least 3072MB, if you follow the general suggestion of setting it to 1/4 of available RAM. Also, your max_connections setting looks awfully high. Do you really need 1024 connections? If so, I'd suggest looking into connection pooling software such as pgpool-II.

With an SSD, your actual random page cost probably isn't that high either. With that setting you may be pushing the planner toward sequential access when random (index) access would be cheaper.

Finally, consider increasing the two work_mem settings, though I can't recommend specific values without knowing what kind of queries you're running.
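
If it helps, a rough starting point based on the above might look something like this (values are illustrative, not tuned recommendations; shared_buffers and max_connections need a restart to take effect):

psql -c "ALTER SYSTEM SET shared_buffers = '3072MB'"   # ~1/4 of 12GB RAM
psql -c "ALTER SYSTEM SET max_connections = 200"       # illustrative; pair with a connection pooler
psql -c "ALTER SYSTEM SET random_page_cost = 1.1"      # a common SSD setting

ALTER SYSTEM writes these into postgresql.auto.conf; editing postgresql.conf directly works just as well.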
--
Jay
