Discussion: shared_buffers on Big RAM systems

shared_buffers on Big RAM systems

From
Ron
Date:
Hi,

https://www.postgresql.org/docs/9.6/runtime-config-resource.html

The docs say, "If you have a dedicated database server with 1GB or more of 
RAM, a reasonable starting value for shared_buffers is 25%".

But that's pretty archaic in 2018.  What if the dedicated database server 
has 128GB RAM?

Thanks.

-- 
Angular momentum makes the world go 'round.


Re: shared_buffers on Big RAM systems

From
Rene Romero Benavides
Date:
This topic seems to always be open to discussion. In my opinion, it depends on how big your working dataset is; there's no use in sizing shared_buffers beyond that size. I think the most reasonable thing is to analyze each case as proposed here:
https://www.keithf4.com/a-large-database-does-not-mean-large-shared_buffers/
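
A quick way to put rough numbers on that from psql (a minimal sketch; the
working set you actually touch is usually smaller than the raw database size):

  -- current setting versus the on-disk size of the current database
  SHOW shared_buffers;
  SELECT pg_size_pretty(pg_database_size(current_database()));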

Re: shared_buffers on Big RAM systems

From
Thomas Munro
Date:
On Fri, Dec 14, 2018 at 2:17 AM Ron <ronljohnsonjr@gmail.com> wrote:
> https://www.postgresql.org/docs/9.6/runtime-config-resource.html
>
> The docs say, "If you have a dedicated database server with 1GB or more of
> RAM, a reasonable starting value for shared_buffers is 25%".
>
> But that's pretty archaic in 2018.  What if the dedicated database server
> has 128GB RAM?

I agree, we might as well drop the words "with 1GB or more of RAM".
That's the size of the very smallest cloud instances available these
days, available for free or up to a few bucks a month, and for
physical servers I wonder if you can still get DIMMs that small.

-- 
Thomas Munro
http://www.enterprisedb.com


Installing pg_buffercache (was Re: shared_buffers on Big RAM systems)

From
Ron
Date:
On 12/13/2018 08:25 PM, Rene Romero Benavides wrote:
> This topic seems to be always open to discussion. In my opinion, it 
> depends on how big your work dataset is, there's no use in sizing 
> shared_buffers beyond that size. I think, the most reasonable thing is 
> analyzing each case as proposed here:
> https://www.keithf4.com/a-large-database-does-not-mean-large-shared_buffers/

https://www.postgresql.org/docs/current/pgbuffercache.html

Is this an extension or a shared preload library?  The documentation doesn't 
specify.

Thanks

-- 
Angular momentum makes the world go 'round.


Re: Installing pg_buffercache (was Re: shared_buffers on Big RAM systems)

From
Guillaume Lelarge
Date:
On Fri, Dec 14, 2018 at 07:00, Ron <ronljohnsonjr@gmail.com> wrote:
On 12/13/2018 08:25 PM, Rene Romero Benavides wrote:
> This topic seems to be always open to discussion. In my opinion, it
> depends on how big your work dataset is, there's no use in sizing
> shared_buffers beyond that size. I think, the most reasonable thing is
> analyzing each case as proposed here:
> https://www.keithf4.com/a-large-database-does-not-mean-large-shared_buffers/

https://www.postgresql.org/docs/current/pgbuffercache.html

Is this an extension or a shared preload library?  The documentation doesn't
specify.


It's an extension.
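It is created per database with CREATE EXTENSION; no shared_preload_libraries
entry is needed. A minimal sketch of installing it and looking at what is
cached, assuming the default 8 kB block size:

  CREATE EXTENSION IF NOT EXISTS pg_buffercache;

  -- top relations by number of shared buffers currently cached
  SELECT c.relname,
         count(*) AS buffers,
         pg_size_pretty(count(*) * 8192) AS buffered
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
  JOIN pg_database d ON b.reldatabase = d.oid
                    AND d.datname = current_database()
  GROUP BY c.relname
  ORDER BY buffers DESC
  LIMIT 10;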


--
Guillaume.

Re: shared_buffers on Big RAM systems

From
Олег Самойлов
Date:
I tested this. Shared buffers work better than the OS-level filesystem cache. The more shared_buffers (but less than the
database size), the better. With huge_pages it is better still. But you must reserve enough free memory for the OS and
PostgreSQL itself.
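
For example, a sketch of what that could look like on a large box (the values
are purely illustrative, not a recommendation; both settings only take effect
after a restart):

  -- illustrative values only; leave headroom for the OS and backends
  ALTER SYSTEM SET shared_buffers = '32GB';
  ALTER SYSTEM SET huge_pages = 'try';
  -- then restart the server and check with: SHOW shared_buffers;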

> On Dec 13, 2018, at 18:17, Ron <ronljohnsonjr@gmail.com> wrote:
>
> Hi,
>
> https://www.postgresql.org/docs/9.6/runtime-config-resource.html
>
> The docs say, "If you have a dedicated database server with 1GB or more of RAM, a reasonable starting value for
> shared_buffers is 25%".
>
> But that's pretty archaic in 2018.  What if the dedicated database server has 128GB RAM?
>
> Thanks.
>
> --
> Angular momentum makes the world go 'round.
>




Re: shared_buffers on Big RAM systems

From
Jeff Janes
Date:
On Thu, Dec 13, 2018 at 11:51 PM Thomas Munro <thomas.munro@enterprisedb.com> wrote:
On Fri, Dec 14, 2018 at 2:17 AM Ron <ronljohnsonjr@gmail.com> wrote:
> https://www.postgresql.org/docs/9.6/runtime-config-resource.html
>
> The docs say, "If you have a dedicated database server with 1GB or more of
> RAM, a reasonable starting value for shared_buffers is 25%".
>
> But that's pretty archaic in 2018.  What if the dedicated database server
> has 128GB RAM?

I agree, we might as well drop the words "with 1GB or more of RAM".
That's the size of the very smallest cloud instances available these
days, available for free or up to a few bucks a month, and for
physical servers I wonder if you can still get DIMMs that small.

AWS still has some with 512MB, although it can be a challenge to get anything to compile in that amount of memory if there is anything else running.

But I don't think I would recommend starting at 25% of RAM on a larger server.  Is that really good advice?  I would usually start out at 1GB even if the server has 128GB, and increase it only if there was evidence it needed to be increased.  Due to double buffering between shared_buffers and OS cache, 25% seems like a lot of wasted space.  You need shared_buffers as a cooling off tank where dirty data can wait for their corresponding WAL to get flushed in the background before they get written out themselves.  I think 1GB is enough for this, even if you have 128GB of RAM.

If your entire database (or the active portion of it) fits in RAM, then it probably makes sense to set shared_buffers high enough to hold your entire database.  But if it doesn't fit in RAM, then I don't see a reason to devote even 25% of a large server to shared_buffers.
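
One place to look for that kind of evidence is the buffer cache hit ratio (a
rough sketch; a low ratio can also just mean a cold cache or large sequential
scans, so it is a hint rather than proof):

  SELECT datname,
         blks_hit,
         blks_read,
         round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
  FROM pg_stat_database
  WHERE datname = current_database();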

Re: shared_buffers on Big RAM systems

From
Andres Freund
Date:
Hi,

On 2019-04-11 15:39:15 -0400, Jeff Janes wrote:
> But I don't think I would recommend starting at 25% of RAM on a larger server.
> Is that really good advice?  I would usually start out at 1GB even if the
> server has 128GB, and increase it only if there was evidence it needed to
> be increased.  Due to double buffering between shared_buffers and OS cache,
> 25% seems like a lot of wasted space.  You need shared_buffers as a cooling
> off tank where dirty data can wait for their corresponding WAL to get
> flushed in the background before they get written out themselves.  I think
> 1GB is enough for this, even if you have 128GB of RAM.

That runs very much contrary to my experience. If you actually get
writes into your cluster, having a small shared_buffers will create a
vastly larger amount of total writes, because every time a page is
evicted from shared buffers, it'll shortly afterwards be written out to
disk by the OS, whereas that write would not happen if the page stayed
in shared buffers.

Due to checkpoint sorting (~9.6?), writes from the checkpointer are also
vastly more efficient than either bgwriter-triggered or backend-triggered
writes, because it's much more likely that the OS / IO stack will
write-combine them.

I think that, with the exception of workloads that have a lot of
truncations (e.g. tests that create/drop schemas), which are slow due to
the implied shared buffer scan, a lot of the problems with large
shared_buffers have been fixed. Far from perfect, of course (i.e. the
double buffering you mention).
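
One way to see that effect on a running system is pg_stat_bgwriter (a sketch;
these are the column names as they exist in the releases current as of this
thread):

  -- where buffer writes are coming from
  SELECT buffers_checkpoint,  -- written by the checkpointer (sorted, easy to combine)
         buffers_clean,       -- written by the background writer
         buffers_backend      -- evicted and written by ordinary backends
  FROM pg_stat_bgwriter;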

Greetings,

Andres Freund