Re: [HACKERS] LWLock optimization for multicore Power machines

From: Alexander Korotkov
Subject: Re: [HACKERS] LWLock optimization for multicore Power machines
Date:
Msg-id: CAPpHfdtouGMnTjh5YKwYxFm2O=u8x+Spn1_H1jd+m6G-ZvPqrw@mail.gmail.com
In reply to: Re: [HACKERS] LWLock optimization for multicore Power machines  (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
Responses: Re: [HACKERS] LWLock optimization for multicore Power machines  (Bernd Helmle <mailings@oopsware.de>)
Re: [HACKERS] LWLock optimization for multicore Power machines  (Bernd Helmle <mailings@oopsware.de>)
List: pgsql-hackers
On Mon, Feb 13, 2017 at 10:17 PM, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
On 02/13/2017 03:16 PM, Bernd Helmle wrote:
On Saturday, 11.02.2017 at 15:42 +0300, Alexander Korotkov wrote:
Thus, I see reasons why the absolute results in your tests are lower than in my previous tests.
1.  You used 28 physical cores while I was using 32 physical cores.
2.  You ran the tests in PowerVM while I was running them on bare metal. PowerVM could add some overhead.
3.  I guess you ran pgbench on the same machine, while in my tests pgbench was running on another node of the IBM E880.


Yeah, pgbench was running locally. Maybe we can get some resources to
run it remotely. Interesting side note: if you run a second Postgres
instance with the same pgbench workload in parallel, you get nearly the
same transaction throughput as with a single instance.

Short side note:

If you run two Postgres instances concurrently with the same pgbench
parameters, each instance achieves nearly the same transaction
throughput as when running against a single instance, e.g.


That strongly suggests you're hitting some kind of lock. It'd be good to know which one. I see you're running the default read/write pgbench script, which also updates pgbench_branches and other tiny tables - it's possible the sessions are trying to update the same row in those tables. You're running with scale 1000, but with 100 clients collisions are still likely thanks to the birthday paradox (the chance that at least two of 100 clients pick the same one of 1000 branch rows is over 99%).

Otherwise it might be interesting to sample wait events, which could tell us more about the locks.
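For a crude version of this, the wait event columns added to pg_stat_activity in 9.6 can be polled while the benchmark is running; a query along these lines (a rough sketch, to be run repeatedly against the server under test) shows which events dominate:

```sql
-- Snapshot of current wait events across active backends (PostgreSQL 9.6+).
-- Run repeatedly (e.g. once per second) while pgbench is running.
SELECT wait_event_type, wait_event, count(*) AS backends
FROM pg_stat_activity
WHERE state = 'active' AND wait_event IS NOT NULL
GROUP BY wait_event_type, wait_event
ORDER BY backends DESC;
```

Each snapshot only catches backends mid-wait, so many samples are needed before the distribution means anything.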

+1
And you could try pg_wait_sampling to sample wait events.
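For reference, a minimal pg_wait_sampling session might look like the following sketch (it assumes the extension has been added to shared_preload_libraries and uses its documented profile view):

```sql
-- pg_wait_sampling must be loaded via shared_preload_libraries first.
CREATE EXTENSION pg_wait_sampling;

-- Cumulative profile of sampled wait events, aggregated over all backends:
SELECT event_type, event, sum(count) AS samples
FROM pg_wait_sampling_profile
GROUP BY event_type, event
ORDER BY samples DESC;
```

Unlike polling pg_stat_activity from the client, the extension samples in the server with a collector process, so short waits are far less likely to be missed.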

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
