Re: Understanding max_locks_per_transaction

From: Tom Lane
Subject: Re: Understanding max_locks_per_transaction
Msg-id: 2438977.1697481141@sss.pgh.pa.us
In reply to: Understanding max_locks_per_transaction  (Craig McIlwee <craigm@vt.edu>)
Responses: Re: Understanding max_locks_per_transaction
List: pgsql-general
Craig McIlwee <craigm@vt.edu> writes:
> Most discussions regarding the lock table say that the size of the lock
> table determines how many locks can be held.  The documentation for
> max_locks_per_transaction [3] reads slightly differently though, and in
> particular this phrase stands out to me:

>> no more than this many distinct objects can be locked at any one time

> To me, that seems to be saying that multiple locks for the same object
> (e.g. for a single table) would only consume a single lock table entry.
> Finally on to my first question: Am I interpreting the documentation
> correctly, that multiple locks for the same object only consume a single
> lock table entry,

Yes ... however it's a good deal more complicated than that.

What actually happens under the hood is that we allocate enough shared
memory space for (MaxBackends + max_prepared_transactions) *
max_locks_per_transaction LOCK structs (which are the per-locked-object
entries) and twice that many PROCLOCK structs, which are
per-lock-per-holder information.  The 2X multiplier assumes that on
average about two sessions will be holding/requesting locks on any
specific locked object.
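The sizing rule above can be sketched numerically. This is only an illustration of the arithmetic described in the text, with hypothetical parameter values, not the actual server code:

```python
# Sketch of the shared-memory lock table sizing rule described above.
# Parameter values here are hypothetical examples.
max_backends = 115            # assume: max_connections plus autovacuum/parallel workers
max_prepared_transactions = 10
max_locks_per_transaction = 64

# Per-locked-object entries (LOCK structs):
lock_slots = (max_backends + max_prepared_transactions) * max_locks_per_transaction

# Per-lock-per-holder entries (PROCLOCK structs); the 2X multiplier
# assumes about two sessions hold/request locks on any given object.
proclock_slots = 2 * lock_slots

print(lock_slots, proclock_slots)  # 8000 16000
```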

Now, MaxBackends is more than max_connections, because it also
accounts for autovacuum workers, parallel workers, etc.  So that's
one of the sources of the fuzzy limit you noticed.  The other source
is that we allocate about 100K more shared memory space than we think
we need, and it's possible for the lock tables to expand into that
"slop" space.  I've not checked the sizes of these structs lately,
but the slop space could surely accommodate several hundred more
locks than the initial estimate allows.
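To get a feel for the slop effect, here is a back-of-the-envelope estimate. The struct sizes below are purely illustrative assumptions (the real sizeof(LOCK) and sizeof(PROCLOCK) vary by version and platform), but they show how 100K of spare shared memory translates into several hundred extra lock table entries:

```python
# Illustrative only: struct sizes are assumptions, not real values.
SLOP_BYTES = 100 * 1024
ASSUMED_LOCK_SIZE = 160       # hypothetical sizeof(LOCK)
ASSUMED_PROCLOCK_SIZE = 80    # hypothetical sizeof(PROCLOCK)

# Each additional locked object needs one LOCK entry plus, on average,
# two PROCLOCK entries (matching the 2X sizing assumption above).
bytes_per_extra_object = ASSUMED_LOCK_SIZE + 2 * ASSUMED_PROCLOCK_SIZE
extra_objects = SLOP_BYTES // bytes_per_extra_object
print(extra_objects)  # 320
```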

Certainly it's safe to raise max_locks_per_transaction a good deal
on modern machines, but I'm not sure that you can reasonably get
to a place where there is a mathematical guarantee that you won't
run out of shared memory.  Even if you know how many lockable
objects your installation has (which I bet you don't, or at least
the number isn't likely to hold still for long) it's pretty hard
to say exactly how many PROCLOCK entries you might need.  And
bloating the lock table size by max_connections/2 or so to try
to brute-force that doesn't seem like a good plan.

I'd just raise max_locks_per_transaction until you stop seeing
problems, and then maybe add a factor of two safety margin.

            regards, tom lane
