Re: Out of Shared Memory: max_locks_per_transaction

From: Eliot Gable
Subject: Re: Out of Shared Memory: max_locks_per_transaction
Date:
Msg-id CAD-6L_WdFxsN7NYddqkwvTnYDfxLq1CSY4JKMQPrWYMxu8idvQ@mail.gmail.com
In reply to: Re: Out of Shared Memory: max_locks_per_transaction  (Merlin Moncure <mmoncure@gmail.com>)
Responses: Re: Out of Shared Memory: max_locks_per_transaction  (Eliot Gable <egable+pgsql-general@gmail.com>)
List: pgsql-general

> The most likely possibility is that you have a transaction being left
> open and accumulating locks. Of course, you have to rule out the
> possibility that you simply need to increase max_locks_per_transaction:
> if you have a lot of tables, it might be reasonable to have to raise it
> even on a stock config.
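
For reference, my understanding of the relevant knobs is roughly this (just a sketch of what to check from psql, not our actual values):

    -- Current settings for the lock table sizing
    SHOW max_locks_per_transaction;
    SHOW max_connections;
    SHOW max_prepared_transactions;

    -- The shared lock table holds roughly
    --   max_locks_per_transaction * (max_connections + max_prepared_transactions)
    -- entries, so raising max_locks_per_transaction (postgresql.conf,
    -- server restart required) grows the table.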


We allow 500 connections to the DB with 64 locks per transaction, which gives 32,000 lock slots. During the stress testing, I had roughly 40 simultaneous operations going through the test application. The test application uses a separate set of threads for accessing the database, along with a shared connection pool and a FIFO queue attached to each connection. The queries are mostly inserts, updates, and deletes, so they are batched into transactions of 100-1,000 queries each. At the start of each transaction, a stored procedure is called which acquires locks on 8 tables in a specific order to prevent the triggers on those tables from deadlocking with each other and with other things accessing the database. In total, at most 15 tables are touched by a batch of queries.
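
The lock-acquisition step at the start of each batch looks roughly like this (table names and lock mode here are placeholders, not our real schema):

    BEGIN;
    -- Acquire the 8 table locks in one fixed order so concurrent batches
    -- and their triggers cannot deadlock against each other.
    LOCK TABLE t1, t2, t3, t4, t5, t6, t7, t8 IN SHARE ROW EXCLUSIVE MODE;
    -- ... 100-1,000 INSERT / UPDATE / DELETE statements ...
    COMMIT;  -- all lock entries for this transaction are released here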

Another process comes along and processes the records being inserted into the database. It pulls up to 10 records from a table, processes them, and moves them into a "processed" table. The processing of each record is rather complex. To facilitate it, 6 temporary tables are created while a record is processed and dropped once that record is complete, and 8 additional tables are accessed in some way along the way. Each call to the processing stored procedure runs in its own transaction and handles only those 10 records at a time. This keeps the transactions short so they do not block other activity in the database.
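
The per-record shape is roughly the following (names are placeholders); as I understand it, each temporary table created and dropped inside the transaction keeps its lock entries until COMMIT:

    BEGIN;
    -- per record:
    CREATE TEMPORARY TABLE scratch_1 (record_id bigint, payload text);
    -- ... five more temporary tables, plus the complex processing ...
    DROP TABLE scratch_1;
    -- Even after the DROP, the ACCESS EXCLUSIVE entries for the temp
    -- tables remain in the shared lock table until the transaction ends.
    COMMIT;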
 
> One thing that can unfortunately cause this is advisory locks eating up
> exactly the amount of shared memory you have. That's another thing to
> rule out.

How would I rule this out?
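
Would grouping pg_locks by lock type show it, something like this?

    -- Count lock entries by type; a large 'advisory' count would point
    -- at advisory locks as the culprit.
    SELECT locktype, count(*)
    FROM pg_locks
    GROUP BY locktype
    ORDER BY count(*) DESC;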
 

> How do I track down what is going on if I cannot even run psql to get into
> the DB to run troubleshooting queries?

> It's a nasty problem. If shared memory is exhausted and stuck, I think
> the only practical thing to do is to restart the database or nuke the
> attached clients. Maybe try restarting the test, but keep an open
> session *with an open transaction* that has previously queried both
> pg_locks and pg_stat_activity. It's just a hunch, but perhaps this
> might allow you to query said views and gather some details.


That is an interesting suggestion. I will definitely give that a try.
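
If I understand the idea correctly, the standby session would look roughly like this, opened before the stress test starts:

    BEGIN;
    -- Touch both views once so this backend already has them open.
    SELECT count(*) FROM pg_stat_activity;
    SELECT count(*) FROM pg_locks;
    -- Leave this session and its transaction open while the test runs,
    -- then re-query the views once the lock table fills up.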

Is pg_locks the table I would query to see what is eating up those 32,000 lock slots? Is there some other table or query I could run that would provide additional information about those 32,000 slots and who or what is using them?
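
I was thinking of something along these lines, unless there is a better way to break the usage down:

    -- Group the lock entries by backend and lock type to see which
    -- sessions hold the most; joining to pg_stat_activity on pid would
    -- add the query text (column names vary a bit between versions).
    SELECT pid, locktype, mode, granted, count(*) AS entries
    FROM pg_locks
    GROUP BY pid, locktype, mode, granted
    ORDER BY entries DESC
    LIMIT 20;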

Thanks for the info and your suggestions! 
