Discussion: Maximum amount of pg_locks


Maximum amount of pg_locks

From: Tim Herren
Date:
Hi

I'm trying to wrap my head around how the calculation of the maximum number of locks works in PostgreSQL 16.11.
I've already come to understand that the maximum number of locks is not enforced per transaction, but rather influenced by the settings "max_locks_per_transaction" and "max_connections".
I say "influenced" rather than "calculated" because on my server a simple multiplication of those two values, set to 512 and 180 respectively, gives 92,160.

Yet I regularly observe around 119k locks during my backup using "pg_dump -Fc".

postgres=# SELECT
    locktype,
    mode,
    count(*)
FROM pg_locks
GROUP BY 1, 2
ORDER BY 3 DESC;
  locktype  |      mode       | count  
------------+-----------------+--------
 relation   | AccessShareLock | 119948
 virtualxid | ExclusiveLock   |      2
(2 rows)

The database I'm backing up at that time contains 242 schemas, each with around 500 relations (a mix of tables and sequences), i.e. roughly 121,000 relations in total.
So that matches pretty well (a count like the sketch below confirms it directly).
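
As a sanity check, something along these lines counts the user relations pg_dump ends up holding AccessShareLock on; the relkind letters are ordinary table ('r'), partitioned table ('p'), sequence ('S') and materialized view ('m'):

-- Sketch: count the relations pg_dump will lock in this database.
SELECT count(*)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'p', 'S', 'm')
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND n.nspname NOT LIKE 'pg_toast%';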

I've verified that my configured settings are sufficient, in the sense that I get a usable backup and my database remains operational during the backup window. Nonetheless I would like to understand where that difference comes from and how close to the "actual limit" I am.
Please let me know if this needs more information.

Thanks

Re: Maximum amount of pg_locks

From: Beniamin Hendre
Date:
An update script with the maximum raised??

On Wed, Jan 28, 2026 at 22:12, Tim Herren <tim.herren@protonmail.ch> wrote:
> [...]

Re: Maximum amount of pg_locks

From: Tom Lane
Date:
Tim Herren <tim.herren@protonmail.ch> writes:
> I'm trying to wrap my head around the way the calculation for the maximum amount of locks works in postgres 16.11
> I already came to the understanding that the maximum amount of locks are not on a transaction basis, but rather influenced by the setting "max_locks_per_transaction" and the "max_connections".
> I'm saying influenced rather than calculated because on my server a simple multiplication of those two values, set at 512 and 180 respectively gives 92160.
> Yet I regularly observe around 119k locks during my backup using "pg_dump -Fc".

Well, the hash table size is indeed set by a calculation of that form,
but it's max_locks_per_transaction times the number of potential
locker processes --- not only regular sessions (max_connections),
but also autovacuum processes and other background workers.  And
prepared transactions, too.  lock.c has

#define NLOCKENTS() \
    mul_size(max_locks_per_xact, add_size(MaxBackends, max_prepared_xacts))

where postinit.c calculates MaxBackends as:

    /* Note that this does not include "auxiliary" processes */
    MaxBackends = MaxConnections + autovacuum_worker_slots +
        max_worker_processes + max_wal_senders + NUM_SPECIAL_WORKER_PROCS;

Then on top of that, there's a pretty considerable fudge factor:
lock.c scales up its shared-memory size request by 10%, and there
is (from memory) about 100K slop space added on top of the total of
shared-memory size requests from all modules, and all of that space
is potentially available for the lock table to overflow into.  (Other
shared data structures could claim it too, but I think the lock table
is the only one that we don't have a hard upper bound for.)
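
To put rough numbers on that for a v16 installation (where the autovacuum term in MaxBackends is autovacuum_max_workers rather than autovacuum_worker_slots), a back-of-the-envelope sketch of NLOCKENTS() against the live settings could look like the query below; the "+ 1" is an approximation for the special worker slots, which aren't exposed as a GUC.

-- Sketch only: approximates NLOCKENTS() on PG 16.  The real ceiling is
-- higher because of the 10% headroom and the shared-memory slop.
SELECT current_setting('max_locks_per_transaction')::int
       * ( current_setting('max_connections')::int
         + current_setting('autovacuum_max_workers')::int
         + current_setting('max_worker_processes')::int
         + current_setting('max_wal_senders')::int
         + 1  -- autovacuum launcher and friends, approximated
         + current_setting('max_prepared_transactions')::int
         ) AS approx_lock_table_entries;

With the 512 and 180 above plus default worker settings (3, 8 and 10), that gives 512 * 202 = 103,424 entries; the 10% headroom plus leftover slop puts the observed ~119k within plausible range.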

If you really want to know how many locks can be taken in your
particular setup, I'd counsel experimenting by deliberately exhausting
the lock space.  (Maybe not during production hours.)  The precise
limit will vary across PG versions, and potentially depend on other
factors like what extensions you have loaded.
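
Since advisory locks live in the same shared lock table, one throwaway way to run that experiment (on a test instance, in the spirit of the above) is a sketch like this, which grabs transaction-scoped advisory locks until the table overflows:

-- TEST INSTANCES ONLY: fill the shared lock table with advisory locks
-- and report how many fit.  The expected failure is "out of shared
-- memory", with a hint to raise max_locks_per_transaction.
DO $$
DECLARE
    n bigint := 0;
BEGIN
    LOOP
        n := n + 1;
        PERFORM pg_advisory_xact_lock(n);
    END LOOP;
EXCEPTION WHEN OTHERS THEN
    RAISE NOTICE 'lock table overflowed after % advisory locks: %',
                 n, SQLERRM;
END;
$$;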

            regards, tom lane



Re: Maximum amount of pg_locks

From: Ron Johnson
Date:
On Wed, Jan 28, 2026 at 3:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> [...]

I asked a similar question a few years ago, and you said that on modern systems it's no problem to bump max_locks_per_transaction pretty darned high (like 1 million). "The default, 64, has historically proven sufficient" had me incrementing slowly, but after your comment I added a zero to my existing max_locks_per_transaction value of 1536 (1.5 * 1024) and haven't thought about it since.
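
For anyone following along, the change itself is a one-liner plus a restart, since max_locks_per_transaction sizes shared memory and can't be changed with a reload (the value below is just the 1536 above with a zero added):

-- Takes effect only after a server restart.
ALTER SYSTEM SET max_locks_per_transaction = 15360;
-- after restarting, confirm:
SHOW max_locks_per_transaction;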

--
Death to <Redacted>, and butter sauce.
Don't boil me, I'm still alive.
<Redacted> lobster!

Re: Maximum amount of pg_locks

From: Tim Herren
Date:
On Wednesday, January 28th, 2026 at 10:51 PM, Ron Johnson <ronljohnsonjr@gmail.com> wrote:
> [...]

Thank you for the explanation and for the numbers from an actual production server.
I'll do some more reading and then adjust my monitoring, along the lines of the sketch below.
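
Concretely, something like this is what I have in mind: compare the current pg_locks count with the capacity estimate from earlier in the thread. It ignores the headroom and slop, so it should alert a little early rather than late.

-- Monitoring sketch: held locks vs. the rough NLOCKENTS() estimate.
SELECT count(*) AS held_locks,
       current_setting('max_locks_per_transaction')::int
         * ( current_setting('max_connections')::int
           + current_setting('autovacuum_max_workers')::int
           + current_setting('max_worker_processes')::int
           + current_setting('max_wal_senders')::int
           + 1
           + current_setting('max_prepared_transactions')::int
           ) AS estimated_capacity
FROM pg_locks;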

Thanks again for the valuable input, have a nice day.
Tim