Discussion: too many locks held


too many locks held

From: Jeison Bedoya
Date:
Hi everybody,

In recent days we have seen many processes reaching around 5,000 locks held.
When that happens the machine becomes sluggish and the database stops
responding. I have tried changing configuration parameters but have not
found anything satisfactory. I am also seeing log messages like the
following:
COT idle ERROR:  out of memory
COT idle DETAIL:  Cannot enlarge string buffer containing 0 bytes by
1476395004 more bytes.
COT idle LOG:  incomplete message from client
COT UPDATE waiting LOG:  process 20761 still waiting for ShareLock on
transaction 10580510 after 1664.674 ms

My machine runs PostgreSQL 9.2.2 on Linux with the following settings:

RAM: 128 GB
cores: 32

max_connections = 900
shared_buffers = 2048MB
work_mem = 1024MB
maintenance_work_mem = 1024MB
temp_buffers = 512MB
checkpoint_segments = 103

regards



Re: too many locks held

From: bricklen
Date:

On Tue, Jul 30, 2013 at 3:52 AM, Jeison Bedoya <jeisonb@audifarma.com.co> wrote:
Hi everybody,

In recent days we have seen many processes reaching around 5,000 locks held.


Do you know what queries are holding locks? Is that behaviour expected?
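For example, a minimal sketch (assuming 9.2 and privileges to see other sessions' queries) that joins pg_locks to pg_stat_activity:

    SELECT l.pid, l.locktype, l.mode, l.granted,
           a.state, a.query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.pid = l.pid  -- 9.2 column names: pid, state, query
    ORDER BY l.granted, l.pid;                -- waiters (granted = f) sort first

Rows with granted = false are the waiters; the granted rows on the same lock identify the blockers.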

 
When that happens the machine becomes sluggish and the database stops responding. I have tried changing configuration parameters but have not found anything satisfactory. I am also seeing log messages like the following:
COT idle ERROR:  out of memory
COT idle DETAIL:  Cannot enlarge string buffer containing 0 bytes by 1476395004 more bytes.

I've never come across that message before, so someone wiser will need to comment on that.
 
COT idle LOG:  incomplete message from client
COT UPDATE waiting LOG:  process 20761 still waiting for ShareLock on transaction 10580510 after 1664.674 ms

My machine runs PostgreSQL 9.2.2 on Linux with the following settings:

You will want to upgrade to the latest point release (9.2.4) as there was a serious security vulnerability fixed in 9.2.3. Details: http://www.postgresql.org/about/news/1446/
 

RAM: 128 GB
cores: 32

max_connections = 900

I would say you might be better off using a connection pooler if you need this many connections.


work_mem = 1024MB

work_mem is pretty high. It would make sense in a data warehouse-type environment, but with a max of 900 connections, that can get used up in a hurry. Do you find your queries regularly spilling sorts to disk (something like "External merge Disk" in your EXPLAIN ANALYZE plans)?
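One way to check, using a made-up table and column purely for illustration:

    -- hypothetical table/column; run against one of your own heavy queries
    EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY created_at;

If the output contains a line like "Sort Method: external merge  Disk: 102400kB" rather than "Sort Method: quicksort  Memory: ...", that sort spilled to disk. If your sorts are not spilling, the very high work_mem is buying you nothing while still exposing you to out-of-memory conditions.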

Have you looked at swapping and disk I/O during these periods of sluggishness?


Re: too many locks held

From: Michael Paquier
Date:



On Tue, Jul 30, 2013 at 11:48 PM, bricklen <bricklen@gmail.com> wrote:
On Tue, Jul 30, 2013 at 3:52 AM, Jeison Bedoya <jeisonb@audifarma.com.co> wrote:
RAM: 128 GB
cores: 32

max_connections = 900

I would say you might be better off using a connection pooler if you need this many connections.
Yeah that's a lot. pgbouncer might be a good option in your case.
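A minimal pgbouncer.ini sketch (every value here is an illustrative assumption, not a recommendation for this workload):

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 900
    default_pool_size = 40

The clients keep their 900 connections to the pooler, while Postgres itself only sees up to default_pool_size backends per database/user pair, so max_connections can drop accordingly.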

work_mem = 1024MB

work_mem is pretty high. It would make sense in a data warehouse-type environment, but with a max of 900 connections, that can get used up in a hurry. Do you find your queries regularly spilling sorts to disk (something like "External merge Disk" in your EXPLAIN ANALYZE plans)?
work_mem is a per-operation setting for sort and hash operations, so in your case you could end up with as much as 900GB of memory allocated, based on the maximum number of sessions that can run in parallel on your server. Simply reduce work_mem to something your server can manage and you should be able to solve your OOM problems.
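To put numbers on that: 900 sessions x 1024MB is roughly 900GB in the worst case, on a 128GB machine. A quick session-level experiment (64MB is just an assumed starting point, not a tuned value):

    SET work_mem = '64MB';  -- assumption for testing; affects this session only
    SHOW work_mem;

The permanent change goes in postgresql.conf followed by a reload; ALTER SYSTEM does not exist on 9.2, it only arrived in 9.4.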
--
Michael

Re: too many locks held

From: Kevin Grittner
Date:
Jeison Bedoya <jeisonb@audifarma.com.co> wrote:

> RAM: 128 GB
> cores: 32
>
> max_connections = 900

> temp_buffers = 512MB

In addition to the other comments, be aware that temp_buffers is
the limit of how much RAM *each connection* can acquire to avoid
writing temporary table data to disk.  Once allocated to a
connection, it will be reserved for that use on that connection
until the connection closes.  So temp_buffers could lock down 450
GB of RAM even while all connections are idle.  If the maximum
connections become active, and they average one work_mem allocation
apiece, that's an *additional* 900 GB of RAM which would be needed
to avoid problems.

Reducing connections through a pooler is strongly indicated, and
you may still need to reduce work_mem or temp_buffers.
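A sketch of that direction in postgresql.conf terms (the numbers are illustrative assumptions to show the shape of the change, not tuned values):

    max_connections = 100    # assumes a pooler in front of the database
    work_mem = 64MB          # worst case ~6GB across 100 active sorts
    temp_buffers = 32MB      # caps total per-connection reservations near 3GB

With the original settings, the comparable worst cases were roughly 900GB and 450GB respectively, on a 128GB machine.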

http://wiki.postgresql.org/wiki/Number_Of_Database_Connections

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company