Maximum number of exclusive locks
From | Daniel Verite |
---|---|
Subject | Maximum number of exclusive locks |
Date | |
Msg-id | 07d8bc2e-a501-4f5e-baf8-a7e99e8f4b48@mm |
Responses | Re: Maximum number of exclusive locks |
List | pgsql-general |
Hi,

When deleting large objects, an exclusive lock is grabbed on each object individually. As a result, a transaction that does it en masse can encounter this error:

  ERROR:  out of shared memory
  HINT:  You might need to increase max_locks_per_transaction.

I would expect the maximum number of lo_unlink() calls in the same transaction to be capped at:

  max_locks_per_transaction * (max_connections + max_prepared_transactions)

per the documentation: https://www.postgresql.org/docs/current/static/runtime-config-locks.html

"The shared lock table tracks locks on max_locks_per_transaction * (max_connections + max_prepared_transactions) objects (e.g., tables); hence, no more than this many distinct objects can be locked at any one time"

But in practice, on an otherwise unused 9.5 instance, I've noticed that this query:

  select count(lo_unlink(oid))
    from (select oid from pg_largeobject_metadata limit :LIMIT) s;

with these settings:

  max_locks_per_transaction | 512
  max_connections           | 30
  max_prepared_transactions | 5

starts failing at LIMIT=37133, although I'd expect it to fail, in the best case, at LIMIT = 512*(30+5) = 17920.

Nothing to complain about, but why would the above formula underestimate the number of object locks actually available to a transaction? Isn't it supposed to be a hard cap for such locks?

Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite
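[Editor's note: for reference, the documented cap quoted above works out as follows. This is a quick illustrative sketch (not part of PostgreSQL itself); the helper name lock_table_size and the setting values are taken from the message above.]

```python
def lock_table_size(max_locks_per_transaction: int,
                    max_connections: int,
                    max_prepared_transactions: int) -> int:
    """Number of distinct lockable objects the shared lock table tracks,
    per the formula in the PostgreSQL runtime-config-locks documentation."""
    return max_locks_per_transaction * (max_connections + max_prepared_transactions)

# Settings from the instance described above
cap = lock_table_size(512, 30, 5)
print(cap)  # 17920 -- yet lo_unlink() only started failing at LIMIT=37133
```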