Discussion: PSQL performance - TPS
Hello,
We are working on development of an application with PostgreSQL 9.6 as the backend. The application as a whole is expected to give a throughput of 100k transactions per sec. The transactions reach the DB from a component firing DMLs in an ad-hoc fashion, i.e. a commit is fired after a random number of statements, like 2, 3 or 4. There is no bulk loading of records. The DB should have an HA setup with active-passive streaming replication. We are doing a test setup on an 8-core machine having 16 GB RAM. The actual HW will be better.
Need help in:
1. On this env (8-core CPU, 16 GB), what is the TPS that we can expect? We have tested with simple Java code firing an insert and commit in a loop on a simple table with one column. We get 1200 rows per sec. If we increase the number of threads, TPS decreases.
2. We have tuned some DB params like shared_buffers and synchronous_commit = off. Are there any other pointers for tuning DB params?
Thanks.
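For scale, the 100k TPS target can be sanity-checked with Little's law (concurrency = throughput × latency). The 1 ms per-transaction latency used below is a hypothetical figure for illustration, not a measurement from this setup:

```java
// Back-of-the-envelope sizing for a 100k TPS target.
// ASSUMPTION: ~1 ms client-observed latency per transaction
// (network round trips + commit processing) -- illustrative only.
public class TpsBudget {
    // Little's law: concurrent in-flight transactions = throughput * latency.
    static double streamsNeeded(double targetTps, double perTxnLatencySec) {
        return targetTps * perTxnLatencySec;
    }

    // A single synchronous client loop is capped at 1/latency.
    static double singleStreamCeiling(double perTxnLatencySec) {
        return 1.0 / perTxnLatencySec;
    }

    public static void main(String[] args) {
        System.out.printf("streams needed for 100k TPS: %.0f%n",
                streamsNeeded(100_000, 0.001)); // prints 100
        System.out.printf("single-stream ceiling: %.0f TPS%n",
                singleStreamCeiling(0.001));    // prints 1000
    }
}
```

In other words, under that latency assumption a single-threaded insert+commit loop can never come near the target; some combination of concurrency and batching is unavoidable.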
Hello,
Version 9.6 is used because the components interacting with the DB support this version. The OS is RHEL 7.6.
Thanks!
On Thu, 1 Aug 2019, 10:45 Gavin Flower, <GavinFlower@archidevsys.co.nz> wrote:
On 01/08/2019 15:10, Shital A wrote:
> Hello,
>
> We are working on development of an application with postgresql 9.6 as
> backend. Application as a whole is expected to give an throughput of
> 100k transactions per sec. The transactions are received by DB from
> component firing DMLs in ad-hoc fashion i.e. the commits are fired
> after random numbers of transaction like 2,3,4. There is no bulk
> loading of records. DB should have HA setup in active passive
> streaming replication. We are doing a test setup on a 8-core machine
> having 16 GB RAM. Actual HW will be better.
>
> Need help in:
> 1. On this env(8core cpu, 16GB) what is the TPS that we can expect? We
> have tested with a simple Java code firing insert and commit in a loop
> on a simple table with one column. We get 1200 rows per sec. If we
> increase threads RPS decrease.
>
> 2. We have tuned some DB params like shared_buffers, sync_commit off,
> are there any other pointers to tune DB params?
>
>
> Thanks.
Curious, why not use a more up-to-date version of Postgres, such as 11.4?
More recent versions tend to run faster and to be better optimised!
You also need to specify the operating system! Hopefully you are
running a Linux or Unix O/S!
Cheers,
Gavin
I am not very surprised by these results. However, what's the disk type? That can matter quite a bit.
On Thu, 1 Aug 2019 at 10:51 PM, Andres Freund <andres@anarazel.de> wrote:
Hi,
On 2019-08-01 08:40:53 +0530, Shital A wrote:
> Need help in:
> 1. On this env(8core cpu, 16GB) what is the TPS that we can expect? We have
> tested with a simple Java code firing insert and commit in a loop on a
> simple table with one column. We get 1200 rows per sec. If we increase
> threads RPS decrease.
>
> 2. We have tuned some DB params like shared_buffers, sync_commit off, are
> there any other pointers to tune DB params?
If you've set synchronous_commit = off, and you still get only 1200
transactions/sec, something else is off. Are you sure you set that?
Are your clients in the same datacenter as your database? Otherwise it
could be that you're mostly seeing latency effects.
Greetings,
Andres Freund
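Andres's latency point can be checked against the thread's own numbers; the arithmetic is trivial, only the interpretation is a sketch:

```java
// 1200 commits/sec from one synchronous loop means each
// insert+commit round trip costs roughly 1/1200 s.
public class LatencyCheck {
    static double perTxnMillis(double observedTps) {
        return 1000.0 / observedTps;
    }

    public static void main(String[] args) {
        // prints "per-transaction latency: 0.83 ms"
        System.out.printf("per-transaction latency: %.2f ms%n", perTxnMillis(1200));
        // With synchronous_commit = off there is no fsync on the commit
        // path, so ~0.8 ms per round trip points at client/server
        // latency, not disk speed.
    }
}
```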
Hi,
On 2019-08-01 23:36:33 +0530, Purav Chovatia wrote:
> > If you've set synchronous_commit = off, and you still get only 1200
> > transactions/sec, something else is off. Are you sure you set that?
> I am not very surprised with these results. However, what's the disk type?
> That can matter quite a bit.
Why aren't you surprised? I can easily get 20k+ write transactions/sec on my laptop, with synchronous_commit=off. With appropriate shared_buffers and other settings, the disk speed shouldn't matter that much for an insertion-mostly workload.
Greetings,
Andres Freund
On Thu, Aug 1, 2019 at 2:15 PM Andres Freund <andres@anarazel.de> wrote:
Hi,
On 2019-08-01 23:36:33 +0530, Purav Chovatia wrote:
> > If you've set synchronous_commit = off, and you still get only 1200
> > transactions/sec, something else is off. Are you sure you set that?
> I am not very surprised with these results. However, what’s the disk type?
> That can matter quite a bit.
Also a reminder that you should have a connection pooler in front of your database, such as PgBouncer. If you are churning a lot of connections, you could be hurting your throughput.
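A minimal PgBouncer sketch along those lines (host, database name, pool sizes and auth settings are placeholders to adapt, not recommendations):

```ini
; minimal pgbouncer.ini sketch -- illustrative values only
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling: a server connection is held only for
; the duration of one transaction
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20
```

Clients then connect to port 6432 instead of 5432, and connection churn on the application side no longer translates into backend fork/teardown on the server.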
On Thu, 1 Aug 2019, 23:58 Rick Otten, <rottenwindfish@gmail.com> wrote:
On Thu, Aug 1, 2019 at 2:15 PM Andres Freund <andres@anarazel.de> wrote:
Hi,
On 2019-08-01 23:36:33 +0530, Purav Chovatia wrote:
> > If you've set synchronous_commit = off, and you still get only 1200
> > transactions/sec, something else is off. Are you sure you set that?
> I am not very surprised with these results. However, what's the disk type?
> That can matter quite a bit.
Also a reminder that you should have a connection pooler in front of your database such as PGBouncer. If you are churning a lot of connections you could be hurting your throughput.
Hello,
Yes, synchronous_commit is off on primary and standby.
Primary, standby and clients are in the same datacentre.
shared_buffers is set to 25% of RAM; not much improvement if this is increased.
Other params set are:
effective_cache_size 12GB
maintenance_work_mem 1GB
wal_buffers 16MB
effective_io_concurrency 200
work_mem 5242kB
min_wal_size 2GB
max_wal_size 4GB
max_worker_processes 8
max_parallel_workers_per_gather 8
checkpoint_completion_target 0.9
random_page_cost 1.1
We have not configured a connection pooler. The number of connections is under 20 for this testing.
@Andres, the 20k TPS on your system - is it with batching?
We want to know what configuration we are missing to achieve higher TPS. We are testing inserts on a simple table with just one text column.
Thanks !
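The commit-frequency effect the thread keeps circling can be reproduced outside Postgres. This sketch uses plain file I/O with forced flushes to mimic commit-per-row vs commit-per-batch; it is an analogy for WAL flushing, not a database benchmark:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.*;

// Each "commit" forces data to stable storage, like a WAL flush.
// Forcing after every record is usually far slower than forcing
// once per batch -- the same shape as commit-per-row vs batching.
public class FlushCost {
    static long writeRecords(Path p, int records, int flushEvery) throws IOException {
        long t0 = System.nanoTime();
        try (FileChannel ch = FileChannel.open(p, StandardOpenOption.CREATE,
                StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING)) {
            for (int i = 1; i <= records; i++) {
                ch.write(ByteBuffer.wrap(("row " + i + "\n").getBytes()));
                if (i % flushEvery == 0) ch.force(false); // the "commit"
            }
            ch.force(false); // final flush for any tail records
        }
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("flushcost", ".dat");
        long perRow  = writeRecords(p, 500, 1);   // flush every record
        long batched = writeRecords(p, 500, 100); // flush every 100 records
        System.out.printf("flush-per-row: %.1f ms, flush-per-batch: %.1f ms%n",
                perRow / 1e6, batched / 1e6);
        Files.delete(p);
    }
}
```

On ordinary disks the per-row variant is dramatically slower; with synchronous_commit=off Postgres takes the flush off the commit path, which is why remaining bottlenecks tend to be round-trip latency rather than storage.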
> Application as a whole is expected to give an throughput of 100k transactions per sec.
> On this env(8core cpu, 16GB) what is the TPS that we can expect?
As a reference, maybe you can reuse/adapt the "TechEmpower Framework Benchmarks" tests and compare your PG9.6+hardware results.
The new TechEmpower Framework Benchmarks [2019-07-09 Round 18]
* reference numbers: https://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=update
* source code: https://github.com/TechEmpower/FrameworkBenchmarks
* PG11 config: https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/toolset/databases/postgres/postgresql.conf
* java frameworks: https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/frameworks/Java
> We have tested with a simple Java code firing insert
As I see, there are a lot of Java frameworks, and sometimes a 10x difference in performance:
https://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=update
"Responses per second at 20 updates per request, Dell R440 Xeon Gold + 10 GbE"
("Intel Xeon Gold 5120 CPU (14c28t), 32 GB of memory, and an enterprise SSD. Dedicated Cisco 10-gigabit Ethernet switch")
* java + PG11 results: low:126 -> high:21807
"Responses per second at 20 updates per request, Azure D3v2 instances"
* java + PG11 results: low:329 -> high:2975
best,
Imre
Shital A <brightuser2019@gmail.com> wrote (on Thu, 1 Aug 2019 at 5:11):
> Hello,
> We are working on development of an application with postgresql 9.6 as backend. Application as a whole is expected to give an throughput of 100k transactions per sec. The transactions are received by DB from component firing DMLs in ad-hoc fashion i.e. the commits are fired after random numbers of transaction like 2,3,4. There is no bulk loading of records. DB should have HA setup in active passive streaming replication. We are doing a test setup on a 8-core machine having 16 GB RAM. Actual HW will be better.
> Need help in:
> 1. On this env(8core cpu, 16GB) what is the TPS that we can expect? We have tested with a simple Java code firing insert and commit in a loop on a simple table with one column. We get 1200 rows per sec. If we increase threads RPS decrease.
> 2. We have tuned some DB params like shared_buffers, sync_commit off, are there any other pointers to tune DB params?
> Thanks.