Discussion: Re: driving postgres to achieve benchmark results similar to bonnie++

Re: driving postgres to achieve benchmark results similar to bonnie++

From
Scott Whitney
Date:
What is your connection to your SAN?



Sent via the Samsung GALAXY S®4, an AT&T 4G LTE smartphone


-------- Original message --------
From: Mike Broers <mbroers@gmail.com>
Date: 05/10/2016 1:29 PM (GMT-06:00)
To: John Scalia <jayknowsunix@gmail.com>
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] driving postgres to achieve benchmark results similar to bonnie++

I've made those changes and seen negligible improvement (an increase in reads of roughly 10MB/sec, from 99MB/sec to 112MB/sec), so they didn't hurt, but they didn't exactly unleash huge gains either.

Any idea of a way I can really push the postgres server to see numbers closer to the bonnie++ results, or an explanation for the ceiling?





On Tue, May 10, 2016 at 12:08 PM, Mike Broers <mbroers@gmail.com> wrote:
Thanks for the feedback, I'll update those configs, run some more tests, and follow up.



On Tue, May 10, 2016 at 12:01 PM, John Scalia <jayknowsunix@gmail.com> wrote:


Sent from my iPad

On May 10, 2016, at 10:48 AM, Mike Broers <mbroers@gmail.com> wrote:

I'm having trouble getting postgres to drive enough disk activity to get anywhere close to the disk benchmark numbers I'm getting with bonnie++. We have an SSD SAN, and the xlog is on its own SSD volume as well; this is postgres 9.5 running on CentOS 6.

bonnie++ -n 0 -f -b is the command I'm running, pointed at either the primary data or the xlog location. I'm consistently seeing numbers like this:

Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-

                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--

      Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP

23808M           786274  92 157465  29           316097  17  5751  16

So during the bonnie++ tests I've confirmed in our monitoring that writes peak at 700-800 MB/sec and reads peak around 280-300 MB/sec.

We have 12GB of RAM on the server. When I run pgbench with a scale that puts the pgbench database in the realm of 18GB - 25GB, I barely break 110MB/sec of writes and 80MB/sec of reads. I've run with different options such as unlogged versus logged tables, prepared transactions or not, and transaction counts between 1000 and 40000.
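For scale, a run in that range looks roughly like the following (the scale factor, database name, and client counts here are just illustrative, not the exact commands used):

pgbench -i -s 1500 benchdb              # scale 1500 is roughly 22GB of data (~15MB per scale unit)
pgbench -c 8 -j 4 -t 30000 benchdb      # 8 clients over 4 worker threads, 30k transactions per client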

I thought a parallel pg_dump / restore might also drive the disks, but that doesn't push disk throughput either, topping out around 75MB/sec of reads. Nightly vacuums likewise seem to peak below 110MB/sec of reads.

Here are the non-default pg settings:

max_connections = 1024
shared_buffers = 1024MB
wal_buffers = 16MB
checkpoint_completion_target = '.9'
archive_mode = on
random_page_cost = '1.5'
maintenance_work_mem = 512MB
work_mem = 64MB
max_wal_senders = 5
checkpoint_timeout = 10min
effective_io_concurrency = 4
effective_cache_size = 8GB
wal_keep_segments = 512
wal_level = hot_standby
synchronous_commit = off

Any idea if/why postgres might be bottlenecking disk throughput? Or is there a testing method I'm missing that would get something closer to the bonnie++ levels from within postgres? I'm guessing I'm just not driving enough activity to push it to the limit, but I'm not sure of a straightforward way to verify that.

Thanks,

Mike

Well, I'm no expert with Bonnie, but several of your PostgreSQL settings look wrong for the system config you provided. With 12GB of RAM, shared_buffers should probably be closer to at least 3072MB, if you follow the general suggestion of setting it to 1/4 of your available RAM. Also, your max_connections setting looks awfully high. Do you really need 1024 connections? If so, I'd suggest you look into connection pooling software, like pgpool-II or something similar.

With an SSD drive, I also doubt your random page cost is really that high in practice. With that setting, you may be forcing a lot of queries toward sequential access rather than random access.

Finally, consider increasing the two work_mem settings; I can't recommend specific values without knowing what kind of queries you're running.
--
Jay
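A minimal postgresql.conf sketch of those suggestions (the values below are illustrative assumptions, not recommendations from this thread, apart from the 1/4-of-RAM shared_buffers figure):

shared_buffers = 3072MB        # roughly 1/4 of the 12GB of RAM
max_connections = 200          # assumed figure; front the database with pgpool-II rather than 1024 direct connections
random_page_cost = 1.1         # assumed figure for SSD-backed storage
work_mem = 128MB               # assumed; the right value depends entirely on the query mix
maintenance_work_mem = 1GB     # assumed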




Journyx, Inc.
7600 Burnet Road #300
Austin, TX 78757
www.journyx.com

p 512.834.8888 
f 512-834-8858 

Do you receive our promotional emails? You can subscribe or unsubscribe to those emails at http://go.journyx.com/emailPreference/e/4932/714/ 

Re: driving postgres to achieve benchmark results similar to bonnie++

From
Mike Broers
Date:
We have a 10Gb storage network.

bonnie++ with the fsync option shows ~700MB/sec writes and ~300MB/sec reads, writing a file twice as big as RAM.
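For reference, the sort of invocation described would be something like the following (the mount point and user are placeholders; the size is simply twice the 12GB of RAM):

bonnie++ -d /path/to/pg_mount -s 24576 -n 0 -f -b -u postgres
# -s is the dataset size in MB (~24GB), -n 0 skips the file-creation tests,
# -f skips the per-character tests, -b fsyncs after every write (the "fsync option")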

I am unable to push postgres higher than 115MB/sec of writes or reads on those same mounts that I've verified are performing well with bonnie++. This is probably either because my tests aren't intense enough or because I have misconfigured postgres. If someone has an idea for how to set up pgbench to really push the I/O and prove out the SSD potential, or a different script/approach, that would be awesome.
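One way to push harder might be to use far more clients than cores and a time-based run (the client/thread counts and database name below are guesses for illustration, not settings from this thread):

pgbench -c 64 -j 8 -T 600 -N benchdb
# 64 clients over 8 worker threads for 10 minutes; -N skips the updates to the tiny
# pgbench_tellers/pgbench_branches tables so clients don't serialize on row locks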



On Tue, May 10, 2016 at 1:44 PM, Scott Whitney <scott@journyx.com> wrote:
What is your connection to your SAN?

Re: driving postgres to achieve benchmark results similar to bonnie++

From
Wei Shan
Date:
Are you running pgbench on the same server as the postgres database? Can you check whether there's CPU saturation?


--
Regards,
Ang Wei Shan

Re: driving postgres to achieve benchmark results similar to bonnie++

From
Mike Broers
Date:
Yes, pgbench is running locally. The CPU typically sits around 60-70% idle, and the bulk of the remaining 10-30% is I/O wait. This is with 4 clients running 10k transactions.

I've run some permutations of pgbench that pushed I/O wait higher, to something like 80%, with 4 threads running 16 clients and running for a set number of seconds instead of a number of transactions, but with no better MB/sec than previously described.
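One way to watch that ceiling directly while a run is going (the interval is arbitrary) is per-device throughput from iostat rather than an aggregate monitoring view:

iostat -xm 5      # extended per-device stats in MB/s, refreshed every 5 seconds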

Re: driving postgres to achieve benchmark results similar to bonnie++

From
Kevin Grittner
Date:
On Tue, May 10, 2016 at 2:02 PM, Mike Broers <mbroers@gmail.com> wrote:

> If someone has an idea for how to set up pgbench to
> really push the io to prove out ssd potential, or a different
> script/approach that would be awesome.

You might want to play with some large effective_io_concurrency
settings, starting around 256.

http://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
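As a postgresql.conf line that would be the following (this parameter can be applied with a reload, no restart needed):

effective_io_concurrency = 256    # up from 4; apply with pg_ctl reload or SELECT pg_reload_conf()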


Re: driving postgres to achieve benchmark results similar to bonnie++

From
Mike Broers
Date:
I've set this parameter to 256 to test and am still getting the same results. The only way I reliably see a spike in I/O at all is during the pgbench initialization (still peaking around 110MB/sec of writes), so I'm thinking about playing around with that, or duplicating its approach with larger rows and running it from multiple sessions against different initialization databases on the same volume. If I can't push it further using that technique, then there must be some dumb bottleneck in my config that I'm overlooking.



On Tue, May 10, 2016 at 3:24 PM, Kevin Grittner <kgrittn@gmail.com> wrote:
On Tue, May 10, 2016 at 2:02 PM, Mike Broers <mbroers@gmail.com> wrote:

> If someone has an idea for how to set up pgbench to
> really push the io to prove out ssd potential, or a different
> script/approach that would be awesome.

You might want to play with some large effective_io_concurrency
settings, starting around 256.

http://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Re: driving postgres to achieve benchmark results similar to bonnie++

From
Mike Broers
Date:
OK, so I ran 6 parallel pgbench initializations at a relatively high scale (2500) against 6 different pgbench databases on the same server, and was able to push disk I/O to around 380MB/sec of writes with pretty low CPU I/O wait (around 5-15%). Progress! As I hoped, I just hadn't been pushing hard enough.
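A sketch of that parallel-initialization approach, assuming the databases are simply named bench1 through bench6 (the names and the loop are assumptions; the scale of 2500 is from the run described above):

for i in 1 2 3 4 5 6; do
  createdb bench$i
  pgbench -i -s 2500 bench$i &     # each initialization builds a database of roughly 35-40GB at this scale
done
wait                               # let all six initializations load concurrently and finish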
