Discussion: max_connections / shared_buffers / effective_cache_size questions


max_connections / shared_buffers / effective_cache_size questions

From: Puddle
Date:
Hello, I'm a Sun Solaris sys admin for a start-up
company.  I've got the UNIX background, but now I'm
having to learn PostgreSQL to support it on our
servers :)

Server Background:

Solaris 10 x86
PostgreSQL 8.0.3
Dell PowerEdge 2650 w/4gb ram.
This is running JBoss/Apache as well (I KNOW the bad
juju of running it all on one box, but it's all we
have currently for this project). I'm dedicating 1gb
for PostgreSQL alone.

So far I LOVE it; compared to MySQL it's solid.

The only things I'm kind of confused about (and I've
been searching for answers in a lot of good perf docs,
but they're not too clear to me) are the following:

1.) shared_buffers - I see a lot of references to making
this the size of the available RAM (for the DB).  However,
I also read to make it the size of the pgdata directory.

I notice when I load postgres each daemon is using the
amount of shared memory (shared_buffers).  Our current
dataset (pgdata) is 85mb in size.  So, I'm curious
should this size reflect the pgdata or the 'actual'
memory given?

I currently have this at 128mb

2.) effective_cache_size - from what I read this is
the 'total' allowed memory for postgresql to use
correct? So, if I am willing to allow 1GB of memory
should I make this 1GB?

3.) max_connections - I've been trying to figure out 'how' to
determine this #.  I've read this is buffer_size+500k
per connection.

ie.  128mb(buffer) + 500kb = 128.5mb per connection?

I was curious about 'sort_mem': I can't find any reference
to it in the 8.0.3 documentation; has it been removed?

work_mem and max_stack_depth set to 4096
maintenance_work_mem set to 64mb

Thanks for any help on this.  I'm sure bombardment of
newbies gets old :)

-William




Re: max_connections / shared_buffers / effective_cache_size

From: John A Meinel
Date:
Puddle wrote:

>Hello, I'm a Sun Solaris sys admin for a start-up
>company.  I've got the UNIX background, but now I'm
>having to learn PostgreSQL to support it on our
>servers :)
>
>Server Background:
>
>Solaris 10 x86
>PostgreSQL 8.0.3
>Dell PowerEdge 2650 w/4gb ram.
>This is running JBoss/Apache as well (I KNOW the bad
>juju of running it all on one box, but it's all we
>have currently for this project). I'm dedicating 1gb
>for PostgreSQL alone.
>
>So far I LOVE it; compared to MySQL it's solid.
>
>The only things I'm kind of confused about (and I've
>been searching for answers in a lot of good perf docs,
>but they're not too clear to me) are the following:
>
>1.) shared_buffers - I see a lot of references to making
>this the size of the available RAM (for the DB).  However,
>I also read to make it the size of the pgdata directory.
>
>I notice when I load postgres each daemon is using the
>amount of shared memory (shared_buffers).  Our current
>dataset (pgdata) is 85mb in size.  So, I'm curious
>should this size reflect the pgdata or the 'actual'
>memory given?
>
>I currently have this at 128mb
>
>
You generally want shared_buffers to be no more than 10% of available
RAM. Postgres expects the OS to do its own caching. 128M/4G = 3% seems
reasonable to me. I would certainly never set it to 100% of RAM.
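
For reference, a minimal postgresql.conf sketch of that setting. Note that
8.0 takes shared_buffers as a count of buffers (8 kB each with the default
block size), not a size in MB, so the "128mb" above works out to:

    shared_buffers = 16384    # 16384 * 8 kB = 128 MB, roughly 3% of 4 GB RAM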

>2.) effective_cache_size - from what I read this is
>the 'total' allowed memory for postgresql to use
>correct? So, if I am willing to allow 1GB of memory
>should I make this 1GB?
>
>
This is the effective amount of caching between the actual postgres
buffers and the OS buffers. If you are dedicating this machine to
postgres, I would set it to something like 3.5G. If it is a mixed
machine, then you have to think about it.

This does not change how postgres uses RAM; it changes how postgres
estimates whether an index scan will be cheaper than a sequential scan,
based on the likelihood that the data you want will already be cached in
RAM.
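
As a rough sketch of what those numbers look like in 8.0 (the setting is
measured in 8 kB disk pages, so these figures assume the default block size):

    effective_cache_size = 458752    # 458752 * 8 kB = 3.5 GB, dedicated box
    # or 131072 (131072 * 8 kB = 1 GB) if you only want to credit 1 GB of cache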

If your dataset is only 85MB, and you don't think it will grow, you
really don't have to worry about this much. You have a very small database.

>3.) max_connections - I've been trying to figure out 'how' to
>determine this #.  I've read this is buffer_size+500k
>per connection.
>
>ie.  128mb(buffer) + 500kb = 128.5mb per connection?
>
>
Max connections is just how many concurrent connections you want to
allow. If you can get away with lower, do so.  Mostly this is to prevent
connections * work_mem from getting bigger than your real working memory
and causing you to swap.
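
As a back-of-the-envelope example (the 100-connection figure is just an
illustration, not a recommendation):

    max_connections = 100
    work_mem        = 4096    # 4 MB per sort/hash; the value is in kB
    # worst case for concurrent sorts alone: 100 * 4 MB = ~400 MB, and a
    # single complex query can run more than one sort/hash at a time, so
    # treat that product as a floor, not a ceiling.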

>I was curious about 'sort_mem': I can't find any reference
>to it in the 8.0.3 documentation; has it been removed?
>
>
sort_mem changed to work_mem in 8.0, same thing with vacuum_mem ->
maintenance_work_mem.

>work_mem and max_stack_depth set to 4096
>maintenance_work_mem set to 64mb
>
>
It depends on how much space you want to give per connection. 4M is
pretty small for a machine with 4G of RAM, but if your DB is only 85M
it might be plenty.
work_mem is how much memory a sort/hash/etc will use before it spills to
disk. So look at your queries. If you tend to sort most of your 85M db
in a single query, you might want to make it a little bit more. But if
all of your queries are very selective, 4M could be plenty.

I would make maintenance_work_mem more like 512M. It is only used for
CREATE INDEX, VACUUM, etc. Things that are not generally done by more
than one process at a time. And it's nice for them to have plenty of
room to run fast.
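
A sketch of those two settings in 8.0's units (both values are in kB):

    work_mem             = 4096      # 4 MB per sort/hash
    maintenance_work_mem = 524288    # 512 MB for CREATE INDEX, VACUUM, etc.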

>Thanks for any help on this.  I'm sure bombardment of
>newbies gets old :)
>
>-William
>
>
Good luck,
John
=:->



Re: max_connections / shared_buffers /

From: Rod Taylor
Date:
> 1.) shared_buffers - I see a lot of references to making
> this the size of the available RAM (for the DB).  However,
> I also read to make it the size of the pgdata directory.

> 2.) effective_cache_size - from what I read this is
> the 'total' allowed memory for postgresql to use
> correct? So, if I am willing to allow 1GB of memory
> should I make this 1GB?

shared_buffers in your case should be about 10000. It is not taken on a
per-connection basis, but is global for that cluster. Perhaps your
memory analysis tool is fooling you?

effective_cache_size is what you want to set to the amount of ram that
you expect the kernel to use for caching the database information in
memory. PostgreSQL will not allocate this memory, but it will make
adjustments to the query execution methods (plan) chosen.
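
A quick psql sketch of that, assuming a made-up table just for illustration:
effective_cache_size can be changed per session, and only the planner's cost
estimates move; no extra memory gets allocated.

    SHOW effective_cache_size;
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;  -- hypothetical table
    SET effective_cache_size = 458752;                    -- pretend 3.5 GB of OS cache
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
    -- the estimated costs (and possibly the index vs. seq scan choice) shift,
    -- but memory usage does not change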

> 3.) max_connections - I've been trying to figure out 'how' to
> determine this #.  I've read this is buffer_size+500k
> per connection.

> ie.  128mb(buffer) + 500kb = 128.5mb per connection?

Max connections is the number of connections to the database you intend
to allow.

shared_buffers must be of a certain minimum size to support that number
of connections, but the 10k figure above should cover any reasonable
configuration.
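
If I recall the 8.0 docs correctly, the floor is that shared_buffers must be
at least 16 and at least twice max_connections, so for example:

    max_connections = 100      # floor for shared_buffers is then 200 buffers (~1.6 MB)
    shared_buffers  = 10000    # 10000 * 8 kB = ~80 MB, comfortably above that floor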

> work_mem and max_stack_depth set to 4096
> maintenance_work_mem set to 64mb

Sort_mem and vacuum_mem became work_mem and maintenance_work_mem as
those terms better indicate what they really do.

> Thanks for any help on this.  I'm sure bombardment of
> newbies gets old :)

That's alright. We only ask that once you have things figured out,
you help out a few others at your leisure.


--


Re: max_connections / shared_buffers / effective_cache_size questions

From: Puddle
Date:
Thanks for the feedback guys.

The database will grow in size.  This first client's
year's worth of data was 85mb (a proof-of-concept
test).  I expect the '05 datasets to be much larger.

I think I may increase work_mem and
maintenance_work_mem a bit more, as suggested.

I'm still a bit confused about max_connections.

I've been keeping max_connections at the # of
Apache connections, since this is all currently on
one box and it's a web-based application.  I wanted to
make sure it stuck with the same # of connections.
However, is there a formula or way to determine
whether the current setup (memory, etc.) can support
that many connections?

Exactly how is max_connections determined, or is it a
guess?

Again, thanks for your help, and Mr. Taylor's too.

Looking forward to providing help when I have more of
a grasp on things too! :)

-William
