Discussion: ERROR: out of memory


ERROR: out of memory

From: Dzmitry Nikitsin
Date:
Hey folks,
I have 4 PostgreSQL servers on 9.3.6 (on the master I use 9.3.5) configured with streaming replication: 1 master (30GB RAM, Intel Xeon E5-2680 v2) and 3 slaves (61GB RAM, Intel Xeon E5-2670 v2 each), all on Ubuntu 14.04.1 LTS.

Master configuration: 

default_statistics_target = 50
maintenance_work_mem = 1GB
constraint_exclusion = on
checkpoint_completion_target = 0.9
effective_cache_size = 22GB
work_mem = 120MB
wal_buffers = 8MB
checkpoint_segments = 16
shared_buffers = 7GB
max_connections = 300


Slave configuration: 


max_connections = 300
shared_buffers = 10GB
effective_cache_size = 45GB
work_mem = 19660kB
maintenance_work_mem = 2GB
checkpoint_segments = 32
checkpoint_completion_target = 0.7
wal_buffers = 16MB
default_statistics_target = 100


I am using the XFS file system; the size of my database is 168GB.

For the Linux kernel I have these settings:

vm.swappiness = 0
vm.overcommit_memory = 2
vm.overcommit_ratio = 50
kernel.shmall = 7864320
kernel.shmmax = 32212254720
kernel.shmmni = 4096


The master is primarily for writing data; the slaves are for reporting. In reality I am using no more than 100 connections to a slave server at the same time, and writing about 3000 records per minute.


I have one table where I write statistics, and it is partitioned by month. Below are the biggest relations by size:

public.stats_201408         | 9212 MB
public.stats_201503         | 8868 MB
pg_toast.pg_toast_6404464   | 8319 MB
pg_toast.pg_toast_317921    | 7520 MB
public.stats_201409         | 7101 MB
public.stats_201412         | 4458 MB
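
A listing like this can be produced with a catalog query along these lines (just a sketch using pg_class and pg_relation_size):

-- largest relations on disk, including TOAST tables and indexes
SELECT n.nspname || '.' || c.relname AS relation_name,
       pg_size_pretty(pg_relation_size(c.oid)) AS size_on_disk
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
ORDER BY pg_relation_size(c.oid) DESC
LIMIT 10;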


I see pg_toast entries here; from the docs I read that large values are stored there, but they are not related to my stats table, which is read/write heavy (the type of the biggest column in that table is character varying(3000)). I.e., they belong to a different table.
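
To confirm which table a given TOAST relation belongs to, a lookup like this works (relation name taken from the listing above):

-- find the owning table of a TOAST relation via pg_class.reltoastrelid
SELECT n.nspname || '.' || c.relname AS owning_table
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.reltoastrelid = 'pg_toast.pg_toast_6404464'::regclass;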


My application creates 100 connections and keeps them for its whole life cycle (usually until the next deploy, which may happen in a couple of days). Over time each connection grows in memory (checked using htop) and free memory goes down. As a result, after a while (usually 3-4 hours) my DB starts throwing:


ERROR:  out of memory

DETAIL:  Failed on request of size 2048.


After I restart my application (reconnecting to the DB), it starts working fine again.
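
A quick way to see what those connections are doing at that point is a pg_stat_activity summary along these lines (a minimal sketch):

-- one row per connection state (active, idle, idle in transaction, ...)
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;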


It even fails on a simple query like:


 SELECT COUNT(*) FROM "stats" WHERE "stats"."bt_id" = $1 AND (stats.created_at >= '2015-04-02 04:00:00.000000') AND (stats.created_at <= '2015-04-03 03:59:59.999999') AND ("stats"."source" IS NOT NULL) AND "stats"."device" IN (1, 2) AND "stats"."ra" = 0 AND "stats"."paid" = 't'
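
For what it's worth, an EXPLAIN of the same query (with a placeholder value standing in for $1) shows whether constraint exclusion limits the scan to the relevant monthly partitions. A sketch:

-- 123 is a placeholder for the $1 bind parameter
EXPLAIN
SELECT COUNT(*)
FROM "stats"
WHERE "stats"."bt_id" = 123
  AND stats.created_at >= '2015-04-02 04:00:00.000000'
  AND stats.created_at <= '2015-04-03 03:59:59.999999'
  AND "stats"."source" IS NOT NULL
  AND "stats"."device" IN (1, 2)
  AND "stats"."ra" = 0
  AND "stats"."paid" = 't';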



Any help appreciated.





Re: ERROR: out of memory

From: "David G. Johnston"
Date:


​Start here:


Research

Ask more specific questions.

I suspect you are not only keeping your 100 sessions directly and permanently connected to the database, but that many of them are also keeping transactions open.
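
Something along these lines against pg_stat_activity (a sketch; columns as of 9.3) would show whether any of those sessions are sitting with a transaction open, and for how long:

-- sessions that currently have a transaction open, oldest first
SELECT pid, usename, state,
       now() - xact_start   AS xact_age,
       now() - state_change AS state_age,
       query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start;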

I'd also drop any preconceptions about which tables you think are problematic, since the tables themselves are likely not the issue.

David J.
 

Re: ERROR: out of memory

From: Melvin Davidson
Date:
Well, right off the bat: if your master's shared_buffers = 7GB and the 3 slaves' shared_buffers = 10GB, that is 37GB total, which means you are guaranteed to exceed the 30GB physical limit on your machine. The general recommendation is to allocate only 1/4 of total memory for shared_buffers, so start by cutting shared_buffers back to 6GB max on ALL PG servers (4 or 5GB is probably better) and things should be happier.
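
You can double-check what each server is actually running with by querying pg_settings, e.g. (just a quick sketch):

-- current memory-related settings as the server sees them
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',
               'effective_cache_size', 'max_connections');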

--
Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.

Re: ERROR: out of memory

From: "David G. Johnston"
Date:
On Thursday, April 2, 2015, Melvin Davidson <melvin6925@gmail.com> wrote:
Well, right off the bat: if your master's shared_buffers = 7GB and the 3 slaves' shared_buffers = 10GB, that is 37GB total, which means you are guaranteed to exceed the 30GB physical limit on your machine.

I don't get why you are adding these together. There isn't any reason to assume the slaves are virtual machines sharing the same 30GB, as opposed to each having 30GB for 120GB total between all 4.

David J.


Re: ERROR: out of memory

From: Dzmitry Nikitsin
Date:
It's 4 different servers.

From: "David G. Johnston" <david.g.johnston@gmail.com>
Date: Thursday, April 2, 2015 at 9:37 PM
To: Melvin Davidson <melvin6925@gmail.com>
Cc: Bob Jones <dzmitry.nikitsin@gmail.com>, "pgsql-general@postgresql.org" <pgsql-general@postgresql.org>
Subject: Re: [GENERAL] ERROR: out of memory

On Thursday, April 2, 2015, Melvin Davidson <melvin6925@gmail.com> wrote:
Well right of the bat, if your master shared_buffers = 7GB and 3 slaves shared_buffers = 10GB, that is 37GB total, which means you are guaranteed to exceed the 30GB physical limit on your machine. 

I don't get why you are adding these together.  There isn't any reason to assume the slaves are virtual machines sharing the same 30gb as opposed to each having 30gb for 120gb total between all 4.

David J.


Re: ERROR: out of memory

From: Dzmitry Nikitsin
Date:
Thank you, David. I see some queries running for 10+ seconds, but I do not have transactions there; they are just SELECT queries. More thoughts?

Thanks,
 Dzmitry 

From: "David G. Johnston" <david.g.johnston@gmail.com>
Date: Thursday, April 2, 2015 at 8:57 PM
To: Bob Jones <dzmitry.nikitsin@gmail.com>
Cc: "pgsql-general@postgresql.org" <pgsql-general@postgresql.org>
Subject: Re: [GENERAL] ERROR: out of memory



On Thu, Apr 2, 2015 at 5:24 PM, Dzmitry Nikitsin <dzmitry.nikitsin@gmail.com> wrote:
Hey folks, 
 I have 4 postgresql servers 9.3.6(on master I use 9.3.5) configured with  streaming replication - with 1 maser(30GB RAM, processor - Intel Xeon E5-2680 v2) and 3 slaves(61 Intel Xeon E5-2670 v2), all on Ubuntu 14.04.1 LTS, 

Master configuration: 

default_statistics_target = 50

maintenance_work_mem = 1GB

constraint_exclusion = on

checkpoint_completion_target = 0.9

effective_cache_size = 22GB

work_mem = 120MB

wal_buffers = 8MB

checkpoint_segments = 16

shared_buffers = 7GB

max_connections = 300


Slave configuration: 


max_connections = 300

shared_buffers = 10GB

effective_cache_size = 45GB

work_mem = 19660kB

maintenance_work_mem = 2GB

checkpoint_segments = 32

checkpoint_completion_target = 0.7

wal_buffers = 16MB

default_statistics_target = 100


I am using XSF file system, size of my database – 168GB. 

For linux kernel I have settings:

vm.swappiness = 0

vm.overcommit_memory = 2

vm.overcommit_ratio = 50

kernel.shmall = 7864320

kernel.shmmax = 32212254720

kernel.shmmni = 4096


Master is primary to write data. Slave – for reporting. In reality I am using not more then 100 connections to slave server at the same time. Writing about 3000 records in a minute. 


I have one table where I writing statistics, that is portioned by month. Below is table size with biggest relations 

public.stats_201408               | 9212 MB

 public.stats_201503               | 8868 MB

 pg_toast.pg_toast_6404464             | 8319 MB

 pg_toast.pg_toast_317921              | 7520 MB

 public.stats_201409               | 7101 MB

 public.stats_201412               | 4458 MB


I see here pg_toast, from doc I read it’s large objects there, but It’s not related to my table stats, which is read/write heavy(type of the biggest column in this table => character varying(3000)). I.e. - it’s related to different table. 


My application create 100 connections & keeping them during whole life cycle(usually until next deploy – that may happen in couple days), with time – connection growing in memory(checking using htop) & free memory going down. As result with time(usually 3-4 hours) my DB start throwing 


ERROR:  out of memory

DETAIL:  Failed on request of size 2048.


After I restart my application(reconnect to DB), it start working fine again. 


It even fail on simple query like: 


 SELECT COUNT(*) FROM “stats" WHERE "stats”."bt_id" = $1 AND (stats.created_at >= '2015-04-02 04:00:00.000000') AND (stats.created_at <= '2015-04-03 03:59:59.999999') AND ("stats"."source" IS NOT NULL) AND "stats"."device" IN (1, 2) AND "stats”."ra" = 0 AND "stats"."paid" = ’t'



Any help appreciated.

​Start here:


Research

Ask more specific questions.

I suspect you not only are keeping your 100 sessions directly and permanently connected to the database but many of them are also keeping open transactions.

​I'd also drop any preconception about which tables you think are problematic - since likely the tables themselves are not the issue.​

David J.​
 

Re: ERROR: out of memory

From: Dzmitry Nikitsin
Date:
Actually, I checked it wrong: the state for the queries I mentioned is idle, i.e. they are showing the previous transaction, so I do not see any long-running transactions right now.

Thanks,
  Dzmitry 

From: "David G. Johnston" <david.g.johnston@gmail.com>
Date: Thursday, April 2, 2015 at 8:57 PM
To: Bob Jones <dzmitry.nikitsin@gmail.com>
Cc: "pgsql-general@postgresql.org" <pgsql-general@postgresql.org>
Subject: Re: [GENERAL] ERROR: out of memory



On Thu, Apr 2, 2015 at 5:24 PM, Dzmitry Nikitsin <dzmitry.nikitsin@gmail.com> wrote:
Hey folks, 
 I have 4 postgresql servers 9.3.6(on master I use 9.3.5) configured with  streaming replication - with 1 maser(30GB RAM, processor - Intel Xeon E5-2680 v2) and 3 slaves(61 Intel Xeon E5-2670 v2), all on Ubuntu 14.04.1 LTS, 

Master configuration: 

default_statistics_target = 50

maintenance_work_mem = 1GB

constraint_exclusion = on

checkpoint_completion_target = 0.9

effective_cache_size = 22GB

work_mem = 120MB

wal_buffers = 8MB

checkpoint_segments = 16

shared_buffers = 7GB

max_connections = 300


Slave configuration: 


max_connections = 300

shared_buffers = 10GB

effective_cache_size = 45GB

work_mem = 19660kB

maintenance_work_mem = 2GB

checkpoint_segments = 32

checkpoint_completion_target = 0.7

wal_buffers = 16MB

default_statistics_target = 100


I am using XSF file system, size of my database – 168GB. 

For linux kernel I have settings:

vm.swappiness = 0

vm.overcommit_memory = 2

vm.overcommit_ratio = 50

kernel.shmall = 7864320

kernel.shmmax = 32212254720

kernel.shmmni = 4096


Master is primary to write data. Slave – for reporting. In reality I am using not more then 100 connections to slave server at the same time. Writing about 3000 records in a minute. 


I have one table where I writing statistics, that is portioned by month. Below is table size with biggest relations 

public.stats_201408               | 9212 MB

 public.stats_201503               | 8868 MB

 pg_toast.pg_toast_6404464             | 8319 MB

 pg_toast.pg_toast_317921              | 7520 MB

 public.stats_201409               | 7101 MB

 public.stats_201412               | 4458 MB


I see here pg_toast, from doc I read it’s large objects there, but It’s not related to my table stats, which is read/write heavy(type of the biggest column in this table => character varying(3000)). I.e. - it’s related to different table. 


My application create 100 connections & keeping them during whole life cycle(usually until next deploy – that may happen in couple days), with time – connection growing in memory(checking using htop) & free memory going down. As result with time(usually 3-4 hours) my DB start throwing 


ERROR:  out of memory

DETAIL:  Failed on request of size 2048.


After I restart my application(reconnect to DB), it start working fine again. 


It even fail on simple query like: 


 SELECT COUNT(*) FROM “stats" WHERE "stats”."bt_id" = $1 AND (stats.created_at >= '2015-04-02 04:00:00.000000') AND (stats.created_at <= '2015-04-03 03:59:59.999999') AND ("stats"."source" IS NOT NULL) AND "stats"."device" IN (1, 2) AND "stats”."ra" = 0 AND "stats"."paid" = ’t'



Any help appreciated.

​Start here:


Research

Ask more specific questions.

I suspect you not only are keeping your 100 sessions directly and permanently connected to the database but many of them are also keeping open transactions.

​I'd also drop any preconception about which tables you think are problematic - since likely the tables themselves are not the issue.​

David J.​