Could not launch new process for connection: Could not allocate memory

From: Lisandro Rostagno
Subject: Could not launch new process for connection: Could not allocate memory
Date:
Msg-id: CAErDPsVRDA84xpZXX=Qx5p4cOYau3RdgUOA_UgF9Mu72Oacx6A@mail.gmail.com
List: pgsql-general
I need some help to understand the cause of this error and how to fix it.

I have a server running PostgreSQL 9.3.
The cluster has around 1,000 databases.
I'm running pgBouncer to pool connections, with a pool_size of only 1 connection per database.
The total number of active connections at any time is ~80, because most of the databases have almost no activity at all.

These are some of my PostgreSQL settings:

max_connections = 300
shared_buffers = 2GB
effective_cache_size = 2GB
maintenance_work_mem = 1GB
work_mem = 288MB
wal_buffers = 8MB
checkpoint_segments = 16


The server has 64GB of total RAM, 16 CPU cores, and it is running CentOS 7.
My stack is: Nginx, uWSGI, Redis, pgBouncer and PostgreSQL.
It's all installed on the same server, so resources have to be shared among the elements of the stack.

Redis is set to use no more than 30GB of RAM.
Nginx uses ~8GB of RAM.
uWSGI uses ~10GB of RAM.
PostgreSQL uses ~8GB of RAM.
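
Doing some rough worst-case math with the current settings (and assuming every active backend can allocate at least one work_mem-sized chunk per sort or hash, which is my understanding of how work_mem works), PostgreSQL alone could ask for a lot more than the ~8GB I budgeted for it:

~80 active connections x 288MB work_mem   ≈ 22GB
300 max_connections    x 288MB work_mem   ≈ 84GB   (if the pools ever opened them all)
plus 2GB shared_buffers and up to 1GB maintenance_work_mem per autovacuum worker

Please correct me if that reading is wrong.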


I'm not an expert at PostgreSQL. I've been reading the documentation on memory consumption, and considering my settings, my best guess is that I need higher values for shared_buffers and effective_cache_size. I got to that conclusion because those settings have been in place for a long time, since back when my server had far fewer resources. Over the last few years I've been adding resources to the server, but at the same time it has become much busier. So I think shared_buffers should be at least 8GB; what do you think about that?

Also, from what I've read, I should set effective_cache_size higher than shared_buffers (taking into account the amount of RAM the server has available). Am I right?
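
Concretely, the change I have in mind for postgresql.conf would be roughly this (the numbers are just my guess, so please correct them):

shared_buffers = 8GB          # up from 2GB; needs a full restart to take effect
effective_cache_size = 12GB   # planner hint only, nothing is actually allocated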

One more thing: what do you think about my work_mem setting? I'm planning to reduce it (I'm not sure why it was set to 288MB in the first place). I know that the vast majority of the queries are very simple and fast, because all the databases belong to simple blog applications: most of the queries are SELECTs, and the UPDATEs just change a value in a specific row, nothing complex. So, what do you think? If I reduce work_mem, am I heading in the right direction?
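
My idea is to first try a much smaller value per session against the heavier queries before changing anything globally, something like this (the table and query here are just a made-up example of a typical blog query, not my real schema):

SET work_mem = '4MB';   -- trial value, much lower than the current 288MB
EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM posts ORDER BY created_at DESC LIMIT 20;  -- substitute a real heavy query
RESET work_mem;
-- if no node reports "Sort Method: external merge  Disk: ...", the lower value should be safe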

Thank you very much in advance!
Warm regards,
Lisandro.
