Re: to many locks held
| From | Michael Paquier |
|---|---|
| Subject | Re: to many locks held |
| Date | |
| Msg-id | CAB7nPqQaZ9TaMn6cSMc9GwCWDNcBS=pv=6=r715+vN9YPNO+xw@mail.gmail.com |
| In reply to | Re: to many locks held (bricklen <bricklen@gmail.com>) |
| List | pgsql-performance |
On Tue, Jul 30, 2013 at 11:48 PM, bricklen <bricklen@gmail.com> wrote:
> On Tue, Jul 30, 2013 at 3:52 AM, Jeison Bedoya <jeisonb@audifarma.com.co> wrote:
>> memory ram: 128 GB
>> cores: 32
>> max_connections: 900
>
> I would say you might be better off using a connection pooler if you need this many connections.

Yeah, that's a lot. pgbouncer might be a good option in your case.
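A minimal pgbouncer setup for this kind of workload might look like the following sketch (the database name, file paths, and pool sizes here are illustrative assumptions, not from this thread):

```ini
; pgbouncer.ini -- sketch only, adjust to your environment
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction    ; reuse a small pool of server connections per transaction
default_pool_size = 32     ; roughly one backend per core
max_client_conn = 900      ; clients the application layer may still open
```

The idea is that the 900 application connections are multiplexed onto a few dozen real PostgreSQL backends, so backend-side memory (work_mem and friends) is bounded by the pool size rather than by max_connections.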
>> work_mem = 1024MB
>
> work_mem is pretty high. It would make sense in a data warehouse-type environment, but with a max of 900 connections, that can get used up in a hurry. Do you find your queries regularly spilling sorts to disk (something like "External merge Disk" in your EXPLAIN ANALYZE plans)?
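To check for spilling, run EXPLAIN ANALYZE on a representative query and look at the "Sort Method" line of the plan (the table and column below are hypothetical; the two annotated plan lines show the typical in-memory vs. on-disk forms):

```sql
EXPLAIN ANALYZE SELECT * FROM orders ORDER BY created_at;
-- Fits in work_mem:   Sort Method: quicksort  Memory: 25kB
-- Spilled to disk:    Sort Method: external merge  Disk: 102400kB
```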
work_mem is a per-operation setting for sort and hash operations, so in your case you could end up with as much as 900GB of memory allocated, given the maximum number of sessions that can run in parallel on your server. Simply reduce work_mem to something your server can manage and you should be able to solve your OOM problems.
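To put numbers on that (my arithmetic, not from the original message): 900 sessions x 1GB work_mem is up to ~900GB if every session runs just one sort concurrently, against 128GB of physical RAM. A more conservative starting point might look like this (illustrative values, to be tuned against the actual workload):

```
# postgresql.conf -- sketch, assuming pgbouncer fronts the application
max_connections = 200     # far fewer direct backends than 900
work_mem = 16MB           # per sort/hash operation, per session
shared_buffers = 32GB     # ~25% of 128GB RAM is a common rule of thumb
```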
--