Hi Andres,
I am extremely sorry for the delayed response. As you suggested, I took performance readings with 128 clients after making the following two changes:
1). Removed AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL); from pq_init(). The git diff for this change is below.
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 8d6eb0b..399d54b 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -206,7 +206,9 @@ pq_init(void)
AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
NULL, NULL);
AddWaitEventToSet(FeBeWaitSet, WL_LATCH_SET, -1, MyLatch, NULL);
+#if 0
AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
+#endif
2). Disabled the GUC variables "bgwriter_flush_after", "checkpointer_flush_after" and "backend_flush_after" by setting them to zero.
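For reference, the equivalent postgresql.conf fragment (assuming the settings are changed there rather than passed on the command line) would be:

```
# disable controlled writeback flushing for this test
bgwriter_flush_after = 0
checkpointer_flush_after = 0
backend_flush_after = 0
```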
With the above two changes in place, here are the readings I got for 128 clients:
CASE: Read-write tests when data exceeds shared buffers.
Non-default settings and test:
./postgres -c shared_buffers=8GB -N 200 -c min_wal_size=15GB -c max_wal_size=20GB -c checkpoint_timeout=900 -c maintenance_work_mem=1GB -c checkpoint_completion_target=0.9 &
./pgbench -i -s 1000 postgres
./pgbench -c 128 -j 128 -T 1800 -M prepared postgres
Run1 : tps = 9690.678225
Run2 : tps = 9904.320645
Run3 : tps = 9943.547176
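Summarizing the three runs (a quick standard-library Python sketch, not part of the patch):

```python
import statistics

# tps readings from the three 30-minute pgbench runs above
tps = [9690.678225, 9904.320645, 9943.547176]

mean = statistics.mean(tps)
spread = max(tps) - min(tps)

print(f"mean tps: {mean:.2f}")
print(f"run-to-run spread: {spread:.2f} tps ({spread / mean:.1%} of mean)")
```

So the runs average roughly 9846 tps with under 3% variation between the best and worst run.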
Please let me know if I need to take readings with other client counts as well.
Note: I took these readings on the postgres master head at:
commit 91fd1df4aad2141859310564b498a3e28055ee28
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Sun May 8 16:53:55 2016 -0400