Discussion: Why shared_buffers max is 8GB?

From:
Alexey Vasiliev
Date:

I have read from several sources that the maximum shared_buffers is 8GB.

Is this true? If so, why exactly is this number the maximum shared_buffers for good performance (on 64-bit Linux)?

Thanks!

From:
Ilya Kosmodemiansky
Date:

Hi Alexey,

On Wed, Mar 26, 2014 at 1:21 PM, Alexey Vasiliev <> wrote:
> I read from several sources, what maximum shared_buffers is 8GB.

I believe that was an issue on some older versions, and that is why it was
mentioned in several talks. Today it is a sort of apocrypha.

> Does this true? If yes, why exactly this number is maximum number of
> shared_buffers for good performance (on Linux 64-bits)?

25% of available RAM is a good place to start. Sometimes, if you have a
heavy workload _and_ the whole database can reside in memory, it is
better to use something larger, around ~75% of RAM.
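The 25% starting point can be sketched as a tiny helper. This is a sketch only: `suggest_shared_buffers` is a hypothetical name, and the value it prints is a starting point for tuning, not a rule.

```shell
# Hypothetical helper: suggest a starting shared_buffers as 25% of total RAM.
# Takes total memory in kB (the unit MemTotal uses in /proc/meminfo).
suggest_shared_buffers() {
    mem_kb=$1
    # 25% of RAM, converted from kB to MB
    awk -v kb="$mem_kb" 'BEGIN { printf "%dMB\n", kb / 4 / 1024 }'
}

# On a live Linux box one might feed it the real MemTotal:
#   suggest_shared_buffers "$(awk '/MemTotal/ {print $2}' /proc/meminfo)"
suggest_shared_buffers 33554432   # 32GB machine; prints 8192MB
```

From there, benchmark with your real workload and adjust.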

Best regards,
Ilya
--
Ilya Kosmodemiansky,

PostgreSQL-Consulting.com
tel. +14084142500
cell. +4915144336040



From:
desmodemone
Date:


On 26 Mar 2014 13:36, "Ilya Kosmodemiansky" <> wrote:
>
> Hi Alexey,
>
> On Wed, Mar 26, 2014 at 1:21 PM, Alexey Vasiliev <> wrote:
> > I read from several sources, what maximum shared_buffers is 8GB.
>
> I believe that was an issue on some older versions, and thats why was
> mentioned in several talks. Today it is a sort of apocrypha.
>
> > Does this true? If yes, why exactly this number is maximum number of
> > shared_buffers for good performance (on Linux 64-bits)?
>
> 25% of available RAM is a good idea to start. Sometimes, if you have
> heavy workload _and_ it is possible to reside whole database in
> memory, better to use something larger, about  ~75% of RAM.
>
> Best regards,
> Ilya
> --
> Ilya Kosmodemiansky,
>
> PostgreSQL-Consulting.com
> tel. +14084142500
> cell. +4915144336040
>
>
>
> --
> Sent via pgsql-performance mailing list ()
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance

max is 1024MB.

You have to test your workload: if shared_buffers is too low you will get too much I/O (the filesystem cache can help... though not always, e.g. /*nfs*/); if too high, your CPU will be eaten by LRU maintenance, latches, and so on.

Mat Dba

From:
Albe Laurenz
Date:

desmodemone wrote:

> max is 1024mb.

That must be a typo.
It can surely be much higher.

Yours,
Laurenz Albe

From:
Markella Skempri
Date:

I wanted to follow up on this question. I'm running 9.3.4.
My DB server has 32GB of RAM, so I have assigned 8GB to shared_buffers. It is quite a big DB but without much traffic; when there is traffic, it's usually big.
 
Lately, the kernel has been killing the postmaster for using too much shared memory. The latest crash was while loading a 500MB file.
 
Should I reduce the shared buffers in order for this to be more robust?
 
Thanks
Markella
 

From:
Martin French
Date:

> max is 1024mb.
> you have to test your workload if it's too low you will get too much
> i/o ( the filesystem cache could help.. not always /*nfs*/),  if too
> high your cpu will be eated by lru/ latch/ and so on.

> Mat Dba


The max is most certainly NOT 1024MB.

http://www.postgresql.org/docs/9.3/static/runtime-config-resource.html

Shared buffers is dependent on a number of other settings, as well as on workload and database size.

Have a look at pgtune for help with configuration.
=============================================

Romax Technology Limited
A limited company registered in England and Wales.
Registered office:
Rutherford House
Nottingham Science and Technology Park
Nottingham
NG7 2PZ
England
Registration Number: 2345696
VAT Number: 526 246 746

Telephone numbers:
+44 (0)115 951 88 00 (main)

For other office locations see:
http://www.romaxtech.com/Contact
=================================
===============
E-mail:
Website:
www.romaxtech.com
=================================

================
Confidentiality Statement
This transmission is for the addressee only and contains information that is confidential and privileged.
Unless you are the named addressee, or authorised to receive it on behalf of the addressee
you may not copy or use it, or disclose it to anyone else.
If you have received this transmission in error please delete from your system and contact the sender. Thank you for your cooperation.
=================================================
From:
Martin French
Date:


> I wanted to follow up from this question. I’m running on 9.3.4

> My DB server has 32GB ram so I have assigned 8GB
> shared_buffer_memory. It is quite a big db but with not much
> traffic. When there is traffic, it’s usually big.

>  
> Lately, the kernel has been killing the postmaster for having
> assigned too much shared memory. Latest crash was when loading a 500MB file.

>  
> Should I reduce the shared buffers in order for this to be more robust?
>  
> Thanks
> Markella

It may be that other memory settings are contributing towards this (work_mem, maintenance_work_mem, max_connections etc).

I would suggest that the OOM killer is working as intended and something is not quite right within the config.

You may want to review the memory consumption at peak times taking into consideration anything else running on the machine.
From:
Markella Skempri
Date:

 
Thanks Martin,
However, this is a dedicated database server and not much else is running on it. Also, I never saw this happening with 9.2, but I can't vouch for the size of the files I was uploading.
 
From:
Martin French
Date:

Markella Skempri <> wrote on 26/03/2014 13:47:47:

>  

> Thanks Martin,
> However this is a database dedicated server and nothing much else is
> running on it. Also, I never saw this happening with 9.2 – but I
> can’t vouch for the size of  files that I was uploading.


My course of action here would be to halve all the memory settings and benchmark performance, then increase them in small steps, repeating the file-load scenario until the crash happens again, keeping a keen eye on the Postgres logs as well as on the syslog output.


The other *potentially dangerous* thing you could do is to alter the OOM killer's behaviour in a couple of ways (assuming PostgreSQL wasn't compiled with -DLINUX_OOM_ADJ=0).

This can be done like so:

1. Alter the kernel's overcommit behaviour so it refuses to overcommit memory:

   sysctl -w vm.overcommit_memory=2

2. Tell the kernel not to kill the postmaster process (on newer kernels the equivalent knob is oom_score_adj):

   echo -17 > /proc/$(ps -ef | grep postmaster | grep -v grep | awk '{print $2}')/oom_adj

I cannot state enough that this could cause unpredictable behaviour of the OOM killer, and thus; the box itself, and is not 100% guaranteed to stop the OOM killer taking Postgres out.
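To make the overcommit setting survive a reboot, it can also go into /etc/sysctl.conf. This is only a sketch; the vm.overcommit_ratio value below is illustrative, not a recommendation:

```
# /etc/sysctl.conf
vm.overcommit_memory = 2   # never overcommit; allocations fail instead of OOM-killing
vm.overcommit_ratio = 80   # percentage of RAM (plus swap) that may be committed
```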

From:
Scott Marlowe
Date:

On Wed, Mar 26, 2014 at 6:21 AM, Alexey Vasiliev <> wrote:
> I read from several sources, what maximum shared_buffers is 8GB.
>
> Does this true? If yes, why exactly this number is maximum number of
> shared_buffers for good performance (on Linux 64-bits)?

On most machines the limit is higher than you'd ever want to set it. I
have a set of servers with 1TB of RAM, and shared_buffers on them is set
to 10GB, and even that is probably higher than it needs to be. The old
1/4-of-memory advice comes from the days when DB server memory was in
the 1 to 16GB range, and even then it was more of a starting place. It
has been found through experience and experiment that few setups can
use more than a few gigabytes of shared buffers and get better
performance.


On Wed, Mar 26, 2014 at 7:24 AM, Markella Skempri
<> wrote:
> I wanted to follow up from this question. I'm running on 9.3.4
> My DB server has 32GB ram so I have assigned 8GB shared_buffer_memory. It is
> quite a big db but with not much traffic. When there is traffic, it's
> usually big.
>
> Lately, the kernel has been killing the postmaster for having assigned too
> much shared memory. Latest crash was when loading a 500MB file.
>
> Should I reduce the shared buffers in order for this to be more robust?

It's not JUST your shared_buffers here. What are your changed settings
in postgresql.conf? Specifically work_mem, max_connections,
temp_buffers and to a lesser extent maintenance_work_mem.

Here's the thing. If you set shared_buffers, work_mem, and
max_connections too low you get a minor problem. Some apps can't
connect, pg is a little slow. If you set them too high you start
killing your DB with the OOM killer which is a major problem.
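A back-of-the-envelope version of that check can be sketched as follows. `estimate_mb` is a hypothetical name, and the formula is a simplification: a single query can use several multiples of work_mem, and it ignores maintenance_work_mem and temp_buffers, so treat the result as a lower bound on the worst case, not a guarantee.

```shell
# Rough lower bound on peak memory use, in MB.
# Args: shared_buffers (MB), work_mem (MB), max_connections.
estimate_mb() {
    shared_buffers_mb=$1
    work_mem_mb=$2
    max_conn=$3
    echo $(( shared_buffers_mb + work_mem_mb * max_conn ))
}

# 8GB shared_buffers, 64MB work_mem, 200 connections:
estimate_mb 8192 64 200   # prints 20992, i.e. ~20.5GB on a 32GB box
```

If the result approaches physical RAM, the OOM killer is a real risk under load.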


From:
desmodemone
Date:

Yes, I remember now, it was 1024GB, sorry.


2014-03-26 14:23 GMT+01:00 Albe Laurenz <>:
desmodemone wrote:

> max is 1024mb.

That must be a typo.
It can surely be much higher.

Yours,
Laurenz Albe

From:
Shaun Thomas
Date:

> On most machines the limit is higher than you'd ever want to set it. I
> have a set of servers with 1TB RAM and shared buffers on them is set
> to 10G and even that is probably higher than it needs to be. The old
> 1/4 of memory advice comes from the days when db server memory
> was in the 1 to 16GB range and even then it was more of a starting place. It
> has been found through experience and experiment that few setups
> can use more shared buffers than a few gigabytes and get better
> performance.

This is really the core of the issue. You can set shared_buffers to almost any level, into multiple TBs if you really wanted to. Whether or not this is prudent, however, is entirely different. There are many considerations at play with shared buffers:

* Shared buffers must (currently) compete with OS inode caches. If shared buffers are too high, much of the cached data is already cached by the operating system, and you end up with wasted RAM.
* Checkpoints must commit dirty shared buffers to disk. The larger this is, the more risk you have when checkpoints come, up to and including an unresponsive database. Writing to disks isn't free, and sadly this is still on the slower side unless all of your storage is SSD-based. You don't want to set this too much higher than your disk write cache.
* Performance gains taper off quickly. Most DBAs don't see gains after 4GB, and fewer still see any gains above 8GB. We have ours set at 4GB after a lot of TPS and risk analysis.
* Since shared_buffers is the amount of memory that could potentially remain uncommitted to data files, the larger this is, the longer crash recovery can take. Having this too high could mean the difference between a five-minute outage and a five-second outage. The checkpoint_* settings control how this is distributed and maintained, but the risk starts here.

With that said, we really need to update the wiki page to reflect all of this. It's still claiming the 25% memory rule:

https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

______________________________________________

See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email


From:
Ilya Kosmodemiansky
Date:

On Wed, Mar 26, 2014 at 5:14 PM, Shaun Thomas <> wrote:
> * Checkpoints must commit dirty shared buffers to disk. The larger this is, the more risk you have when checkpoints come, up to and including an unresponsive database. Writing to disks isn't free, and sadly this is still on the slower side unless all of your storage is SSD-based. You don't want to set this too much higher than your disk write cache.

On some heavily loaded machines we use 48GB of shared buffers (and
sometimes more, depending on the amount of RAM). Of course that works only
with a good enough hardware RAID with a large BBU, a well-tuned Linux
(dirty bytes appropriate to the RAID cache size, etc.), and aggressively
tuned checkpoints and background writer:

 bgwriter_delay               | 10
 bgwriter_lru_maxpages        | 1000
 bgwriter_lru_multiplier      | 10
 checkpoint_completion_target | 0.9
 checkpoint_segments          | 300
 checkpoint_timeout           | 3600

and it really makes sense.


--
Ilya Kosmodemiansky,

PostgreSQL-Consulting.com
tel. +14084142500
cell. +4915144336040



From:
Alexey Klyukin
Date:


On Wed, Mar 26, 2014 at 1:21 PM, Alexey Vasiliev <> wrote:
> I read from several sources, what maximum shared_buffers is 8GB.
>
> Does this true? If yes, why exactly this number is maximum number of shared_buffers for good performance (on Linux 64-bits)?
>
> Thanks!


I've seen cases where going higher than 8GB led to improved performance. Some of the servers we are running have 128GB of RAM and 32GB of shared_buffers, with better performance than they had with 8GB.

One should be aware of several drawbacks:
- OOM killer (Linux). If you allocate more memory than you have on the system (plus swap) and your vm.overcommit_memory setting is left at the default (0), the postmaster will be killed by the Linux OOM killer. Set it to 2, and keep the other settings (work_mem, maintenance_work_mem, temp and WAL buffers) in mind when determining the shared buffer size.
- Checkpoints. In the worst case most of your shared buffers will be flushed to disk during a checkpoint, affecting overall system performance. Make sure the bgwriter is actively and aggressively evicting dirty buffers, and that the checkpoint is spread over the checkpoint interval with checkpoint_completion_target.
- Monitoring. One can use pg_buffercache to peek inside the shared buffers and see which relations are there and how high the usage count is.

In most cases 8GB should be enough even for servers with hundreds of GB of data, since the FS uses the rest of the memory as a cache (make sure you give the planner a hint about how much memory is left for this with effective_cache_size), but the exact answer is a matter of performance testing.
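For a dedicated 32GB server like the one discussed earlier in this thread, that advice might translate into a postgresql.conf fragment along these lines (illustrative values only, not a recommendation):

```
shared_buffers = 8GB            # starting point; verify with performance testing
effective_cache_size = 24GB     # planner hint: roughly the RAM left to the OS cache
```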

Now, the last question would be what the initial justification for the 8GB barrier was. I've heard that there was lock congestion when dealing with a huge pool of buffers, but I think that was fixed even in the pre-9.0 era.

--
Regards,
Alexey Klyukin
From:
Bruce Momjian
Date:

On Wed, Apr  2, 2014 at 11:38:57AM +0200, Alexey Klyukin wrote:
> In most cases 8GB should be enough even for the servers with hundreds of GB of
> data, since the FS uses the rest of the memory as a cache (make sure you give a
> hint to the planner on how much memory is left for this with the
> effective_cache_size), but the exact answer is a matter of performance testing.
>
> Now, the last question would be what was the initial justification for the 8GB
> barrier, I've heard that there were a lock congestion when dealing with huge
> pool of buffers, but I think that was fixed even in the pre-9.0 era.

The issue in earlier releases was the overhead of managing more than 1
million 8k buffers.  I have not seen any recent tests to confirm that
overhead is still significant.

A larger issue is that going over 8GB doesn't help unless you are
accessing more than 8GB of data in a short period of time.  Add to that
the problem of potentially dirtying all the buffers and flushing them to a
now-smaller kernel buffer cache, and you can see why the 8GB limit is
recommended.

I do think this merits more testing against the current Postgres source
code.

--
  Bruce Momjian  <>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + Everyone has their own god. +