Discussion: postgresql 9.1 out of memory


postgresql 9.1 out of memory

From:
Евгений Селявка
Date:
Colleagues, can you help me with some advice? I have

PostgreSQL 9.1.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit

on

Centos 6.3 - 2.6.32-279.11.1.el6.x86_64 #1 SMP Tue Oct 16 15:57:10 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

kernel.shmmax = 68719476736
kernel.shmall = 4294967296
vm.swappiness = 15
vm.overcommit_memory = 2
vm.overcommit_ratio = 90


Server HW:

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    1
Core(s) per socket:    4
CPU socket(s):         2
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 23
Stepping:              6
CPU MHz:               3000.105
BogoMIPS:              6000.04
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              6144K

32GB RAM

HDD 10k rpm SAS.

Today my database crashed with an out-of-memory error.
The server is dedicated to the database; no other applications run on it.

Before the crash I had these parameters in the config:

max_connections = 350
shared_buffers = 24GB
temp_buffers = 128MB
max_prepared_transactions = 350
work_mem = 2GB
maintenance_work_mem = 1GB
max_stack_depth = 4MB
effective_io_concurrency = 2
fsync = on
full_page_writes = off
checkpoint_segments = 64
checkpoint_timeout = 15min
checkpoint_completion_target = 0.75
checkpoint_warning = 50s
random_page_cost = 3.0
effective_cache_size = 14GB
autovacuum_max_workers = 16
autovacuum_vacuum_threshold = 900
autovacuum_analyze_threshold = 350
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.05

After the crash I changed these parameters:

shared_buffers = 8GB           # 1/4 of total RAM
effective_cache_size = 16GB    # 50% of RAM
work_mem = 1GB
 
DB size is 25GB

What other steps can I take to avoid such behavior in the future?

--
Best regards, Селявка Евгений

Re: postgresql 9.1 out of memory

From:
Tom Lane
Date:
Евгений Селявка <evg.selyavka@gmail.com> writes:
> Today my database crashed with an out-of-memory error.

This is probably why:

> work_mem = 2GB

Don't do that.  It's especially silly to have this setting higher than
maintenance_work_mem --- should be the other way 'round.

I would also question your setting of shared_buffers.  The traditional
advice is to dedicate maybe a quarter of RAM to shared_buffers, not
three-quarters.

For more info try

https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

            regards, tom lane
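
For illustration only, a more conservative postgresql.conf for a dedicated
32GB server, in line with the advice above, might look something like this
(the exact numbers are assumptions, not values taken from the thread):

shared_buffers = 8GB           # roughly 25% of RAM
work_mem = 16MB                # per sort/hash operation, per backend
maintenance_work_mem = 1GB
effective_cache_size = 20GB    # estimate of shared_buffers + OS page cache

Keep in mind that work_mem is a per-operation limit: one query with several
sorts or hash joins can allocate it more than once, so a per-backend value
in megabytes rather than gigabytes is the usual starting point.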


Re: postgresql 9.1 out of memory

From:
David Johnston
Date:
Two additional comments on your config:

max_connections = 350
Read up on connection poolers if you really think you need this many.
This interacts with work_mem due to concurrent-processing effects.

max_prepared_transactions = 350
Really?  The default is zero and few people need these...
I'm not sure whether additional resources are consumed by having this
enabled and set so high, but the recommendation is to leave it at 0
(disabled) if you do not use PREPARE TRANSACTION.

David J.
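
As a sketch of the pooler approach mentioned above (PgBouncer is only one
option, and the database name, paths, and pool size below are assumptions):

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction        ; assumes the application keeps no session state
max_client_conn = 350
default_pool_size = 20

Applications then connect to port 6432 instead of 5432, and PostgreSQL itself
only needs max_connections a little above default_pool_size, which in turn
leaves room for a larger work_mem per backend.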









Re: postgresql 9.1 out of memory

From:
Stephen Frost
Date:
Евгений,

* Евгений Селявка (evg.selyavka@gmail.com) wrote:
> What other steps can I take to avoid such behavior in the future?

Monitor /proc/meminfo (specifically, Committed_AS vs. CommitLimit).  If
the amount of committed memory is getting up close to the commit limit,
you need to adjust something to use less memory - perhaps fewer shared
buffers, or lower work_mem, etc.

    Thanks,

        Stephen
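
A minimal way to keep an eye on this from the shell (the interpretation below
assumes the vm.overcommit_* values posted at the start of the thread):

grep -E 'CommitLimit|Committed_AS' /proc/meminfo

With vm.overcommit_memory = 2 and vm.overcommit_ratio = 90, CommitLimit is
roughly swap + 0.90 * RAM, i.e. about 28.8GB of the 32GB here plus swap; once
Committed_AS gets close to that value, new allocations start failing with
out of memory.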


Re: postgresql 9.1 out of memory

From:
Kevin Grittner
Date:
Евгений Селявка <evg.selyavka@gmail.com> wrote:

> Server HW:

> 32GB RAM

> Before the crash I had these parameters in the config:
>
> max_connections = 350

> work_mem = 2GB

> After the crash I changed these parameters:

> work_mem = 1GB

A good "rule of thumb" is to allow for one work_mem allocation per
connection for a reasonable estimation of peak memory usage for
this purpose.  (This may need to be adjusted based on workload, but
it's a reasonable place to start.)  So you have adjusted likely
peaks down from 700GB to 350GB.

I usually start with work_mem set to RAM * 25% / max_connections.
In your case that works out to 23MB.  If you are able to use a
connection pooler to reduce max_connections, that will allow you
to boost work_mem.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
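
To make the arithmetic above concrete (the 100-connection figure is the one
the original poster reports further down; all of these numbers are only a
starting point, not a prescription):

psql -c "SELECT count(*) FROM pg_stat_activity;"   # backends connected right now

# 32GB * 25% = 8GB budgeted for work_mem allocations
# 8GB / 350 connections              = ~23MB per connection
# 8GB / 100 connections (via pooler) = ~80MB per connection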


Re: postgresql 9.1 out of memory

From:
Евгений Селявка
Date:



2013/10/31 Kevin Grittner <kgrittn@ymail.com>
Please resend with a copy to the list, rather than to just me.

-Kevin



From: Евгений Селявка <evg.selyavka@gmail.com>
To: Kevin Grittner <kgrittn@ymail.com>
Sent: Thursday, October 31, 2013 3:53 AM
Subject: Re: [ADMIN] postgresql 9.1 out of memory

Kevin, thank you very much. I have actually set work_mem to 128MB. I monitor activity on my DB cluster and there are about 100 concurrent connections. I think my trouble could also be related to incorrect kernel settings.

vm.swappiness = 15
vm.overcommit_memory = 2
vm.overcommit_ratio = 90

I changed them to:
vm.overcommit_memory = 0
vm.swappiness=30
vm.dirty_background_bytes = 67108864
vm.dirty_bytes = 536870912

But now there is another problem: when the FS cache grows and is then cleaned up, all DB processes freeze. For example, the duration of SELECT 1 is about 500-1000 ms.

sar -B
12:00:01 AM  pgpgin/s pgpgout/s   fault/s  majflt/s  pgfree/s pgscank/s pgscand/s pgsteal/s    %vmeff
...
01:50:01 PM      7.20    227.76  37726.31      0.00  18532.07      0.00      0.00      0.00      0.00
02:00:01 PM     10.27    246.87  37005.50      0.00  19326.56      0.00      0.00      0.00      0.00
02:10:01 PM      9.03    295.57  36891.37      0.00  19254.30      0.00      0.00      0.00      0.00
02:20:01 PM     53.56    251.34  54926.00      0.23  28884.46    353.07    521.79    874.83    100.00
02:30:01 PM     56.51    254.79  42914.24      0.01  26866.86      0.00      0.00      0.00      0.00
02:40:01 PM     49.96    298.62  46861.55      0.00  19883.76      0.00      0.00      0.00      0.00

sar -r
12:00:01 AM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit
...
01:50:01 PM   1317664  31559528     95.99    477160  28905420  10197236     27.58
02:00:01 PM   1248996  31628196     96.20    478000  28917704  10217184     27.63
02:10:01 PM   1244356  31632836     96.22    478844  28929684  10223964     27.65
02:20:01 PM   3295792  29581400     89.98    420304  26938616  10233616     27.68
02:30:01 PM   3277168  29600024     90.03    421732  26979684  10228140     27.66
02:40:01 PM   3260916  29616276     90.08    423204  27017212  10214488     27.63

Could anybody help me with advice?
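
One way to check whether dirty-page writeback is what stalls the backends
during these episodes (a rough diagnostic sketch, not something suggested
elsewhere in the thread):

grep -E 'Dirty|Writeback' /proc/meminfo      # pages waiting to be flushed
cat /proc/sys/vm/dirty_bytes /proc/sys/vm/dirty_background_bytes
sar -d 1 10                                  # device utilization while SELECT 1 is slow

If Dirty regularly climbs toward vm.dirty_bytes and the disks show close to
100% utilization at the same moments, the stalls look more like an I/O
flushing problem than a PostgreSQL memory problem.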








--
Best regards, Селявка Евгений




