Re: Improving connection scalability (src/backend/storage/ipc/procarray.c)
From | Tomas Vondra
---|---
Subject | Re: Improving connection scalability (src/backend/storage/ipc/procarray.c)
Date |
Msg-id | 623799bd-32d9-1e7f-79de-579d1e0b84df@enterprisedb.com
In reply to | Re: Improving connection scalability (src/backend/storage/ipc/procarray.c) (Ranier Vilela <ranier.vf@gmail.com>)
Responses | Re: Improving connection scalability (src/backend/storage/ipc/procarray.c)
List | pgsql-hackers
On 5/25/22 11:07, Ranier Vilela wrote:
> On Wed, May 25, 2022 at 00:46, Andres Freund <andres@anarazel.de> wrote:
>
> Hi Andres, thank you for taking a look.
>
>> On 2022-05-24 12:28:20 -0300, Ranier Vilela wrote:
>>> Linux Ubuntu 64 bits (gcc 9.4)
>>> ./pgbench -M prepared -c $conns -j $conns -S -n -U postgres
>>>
>>> conns    tps head        tps patched
>>> 1        2918.004085     3190.810466
>>> 10       12262.415696    17199.862401
>>> 50       13656.724571    18278.194114
>>> 80       14338.202348    17955.336101
>>> 90       16597.510373    18269.660184
>>> 100      17706.775793    18349.650150
>>> 200      16877.067441    17881.250615
>>> 300      16942.260775    17181.441752
>>> 400      16794.514911    17124.533892
>>> 500      16598.502151    17181.244953
>>> 600      16717.935001    16961.130742
>>> 700      16651.204834    16959.172005
>>> 800      16467.546583    16834.591719
>>> 900      16588.241149    16693.902459
>>> 1000     16564.985265    16936.952195
>>
>> 17-18k tps is pretty low for pgbench -S. For a shared_buffers resident run, I
>> can get 40k in a single connection in an optimized build. If you're testing a
>> workload >> shared_buffers, GetSnapshotData() isn't the bottleneck. And
>> testing an assert build isn't a meaningful exercise either, unless you have
>> way way higher gains (i.e. stuff like turning O(n^2) into O(n)).
>
> Thanks for sharing these hints.
> Yes, these 17-18k tps are disappointing.
>
>> What pgbench scale is this and are you using an optimized build?
>
> Yes, this is an optimized build.
> CFLAGS='-Wall -Wmissing-prototypes -Wpointer-arith
> -Wdeclaration-after-statement -Werror=vla -Wendif-labels
> -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type
> -Wformat-security -fno-strict-aliasing -fwrapv
> -fexcess-precision=standard -Wno-format-truncation
> -Wno-stringop-truncation -O2'
> from config.log

That can still be an assert-enabled build. We need to see the configure flags.
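[Editor's note: CFLAGS alone don't show whether assertions are enabled; a quick way to check, assuming the build's pg_config is on PATH, is:]

```shell
# An assert-enabled build lists --enable-cassert among its configure flags.
pg_config --configure | grep -q -- '--enable-cassert' \
  && echo "assert build (not suitable for benchmarking)" \
  || echo "assertions disabled"

# Or ask a running server directly:
#   psql -U postgres -c "SHOW debug_assertions;"
```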
> pgbench was initialized with:
> pgbench -i -p 5432 -d postgres
>
> pgbench -M prepared -c 100 -j 100 -S -n -U postgres

You're not specifying a duration/number of transactions to execute, so it's
using just 10 transactions per client, which is bound to give you bogus
results due to not having anything in the relcache etc. Use -T 60 or
something like that.

> pgbench (15beta1)
> transaction type: <builtin: select only>
> scaling factor: 1
> query mode: prepared
> number of clients: 100
> number of threads: 100
>
> The shared_buffers is default:
> shared_buffers = 128MB
>
> Intel® Core™ i5-8250U CPU Quad Core
> RAM 8GB
> SSD 256 GB

Well, quick results on my laptop (i7-9750H, so not that different from what
you have):

1 = 18908.080126
2 = 32943.953182
3 = 42316.079028
4 = 46700.087645

So something is likely wrong in your setup.

regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
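[Editor's note: a sketch of the corrected invocation along the lines suggested above, i.e. the same flags from the thread plus an explicit 60-second duration; without -T, pgbench runs only the default 10 transactions per client:]

```shell
# Run for a fixed 60 seconds (-T 60) instead of 10 transactions per client,
# so relcache and prepared-statement caches are warm when measuring.
pgbench -M prepared -c 100 -j 100 -S -n -T 60 -U postgres postgres
```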