Discussion: shared_buffer value


shared_buffer value

From
"Anjan Dave"
Date:
Gurus,
 
I have defined the following values on a db:
 
shared_buffers = 10240          # 10240 = 80MB
max_connections = 100
sort_mem = 1024                 # 1024KB is 1MB per operation
effective_cache_size = 262144   # equals to 2GB for 8k pages
 
Rest of the values are unchanged from default.
 
 
The poweredge 2650 machine has 4GB RAM, and the size of the database (size of 'data' folder) is about 5GB. PG is 7.4, RH9.
 
The machine has been getting quite busy (when, say, 50 students log in at the same time, on top of those already logged in) and is maxing out at 100 connections (will probably increase this to 200 tonight). We have been getting the "too many clients" message when trying to connect. Once connected, pgmonitor and 'pg_stat_activity' show connections reaching about 100.
 
There's a series of SELECT and UPDATE statements that get called when a group of users logs in simultaneously... and for some reason, many of them stay there for a while...
 
During that time, if I do a 'top', I can see multiple postmaster processes, each about 87MB in size. Memory utilization drops to about 30MB free, and I can see a little bit of swap utilization in vmstat then.
 
Question is, does the 80MB buffer allocation correspond to ~87MB per postmaster instance? (with about 100 instances of postmaster, that will be about 100 x 80MB = 8GB??)
 
Should I decrease the buffer value to about 50MB and monitor?
 
Interestingly, at one point, we vacuumed the database, and the size reported by 'df -k' on the pgsql slice dropped very significantly... I guess it had been using a lot of temp files?
 
Further steps will be to add more memory, and possibly drop/recreate a couple of indexes that are used in the UPDATE statements.
 
 
Thanks in advance for any inputs.
-Anjan
 

 

Re: shared_buffer value

From
Richard Huxton
Date:
On Thursday 15 January 2004 22:49, Anjan Dave wrote:
> Gurus,
>
> I have defined the following values on a db:
>
> shared_buffers = 10240          # 10240 = 80MB
> max_connections = 100
> sort_mem = 1024                 # 1024KB is 1MB per operation
> effective_cache_size = 262144   # equals to 2GB for 8k pages
>
> Rest of the values are unchanged from default.
>
>
> The poweredge 2650 machine has 4GB RAM, and the size of the database
> (size of 'data' folder) is about 5GB. PG is 7.4, RH9.

OK - settings don't look unreasonable so far.

> The machine has been getting quite busy (when, say, 50 students login at
> the same time, when others have logged in already) and is maxing out at
> 100 connections (will increase this tonight probably to 200). We have
> been getting "too many clients" message upon trying to connect. Once
> connected, the pgmonitor, and the 'pg_stat_activity' show connections
> reaching about 100.
>
> There's a series of SELECT and UPDATE statements that get called for
> when a group of users log in simultaneously...and for some reason, many
> of them stay there for a while...
>
> During that time, if i do a 'top', i can see multiple postmaster
> processes, each about 87MB in size. The Memory utilization drops down to
> about 30MB free, and i can see a little bit of swap utilization in
> vmstat then.

On Linux you'll see three values: SIZE, RSS and SHARE. SIZE is what you're
looking at, RSS is the resident set size (how much is in main memory) and
SHARE is how much is shared with other processes. So three processes, each
with RSS=15MB and SHARE=10MB, take up 10 + 5 + 5 + 5 = 25MB, not 45MB.
Don't worry about a tiny bit of swap - how is your buff/cache doing then?
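As a rough sketch (standard Linux tools, nothing Postgres-specific; exact
output varies by distro), you can check this yourself: the shared memory
block shows up once in ipcs, and each backend's private footprint is then
roughly its RSS minus its SHARE column in top.

ipcs -m     # the large segment owned by the postgres user is the shared_buffers block
            # per backend: private memory ~= RSS - SHARE (only a few MB each)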

> Should i decrease the buffer value to about 50MB and monitor?

That shared_buffers memory is shared between all backends. The sort_mem,
however, is *per sort*, not even per backend. So if a complicated query uses
four sorts, you could use 4MB in one backend.
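If one particular report is known to be sort-heavy, it can also be raised for
just that session instead of globally; a minimal sketch (the table and column
names are made up for illustration):

SET sort_mem = 4096;   -- 4MB per sort, this session only (renamed work_mem in later releases)
SELECT student_id, count(*) FROM logins GROUP BY student_id ORDER BY 2 DESC;
RESET sort_mem;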

> Interestingly, at one point, we vacuumed the database, and the size
> reported by 'df -k' on the pgsql slice dropped very
> significantly...guess, it had been using a lot of temp files?

You need to run VACUUM regularly to reclaim unused space. Since you're on 7.4,
take a look at the pg_autovacuum utility, or start by running VACUUM ANALYZE
from a cron job every evening. Perhaps a VACUUM FULL at weekends?
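Something like this in the postgres user's crontab would cover it (just a
sketch; adjust the schedule, and note that VACUUM FULL takes exclusive locks):

# nightly VACUUM ANALYZE of all databases
30 2 * * *   vacuumdb --all --analyze --quiet
# optional weekly VACUUM FULL during a quiet window
0 4 * * 0    vacuumdb --all --full --quiet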

> Further steps will be to add more memory, and possibly drop/recreate a
> couple of indexes that are used in the UPDATE statements.

A REINDEX might be worthwhile. Details on this and VACUUM in the manuals.
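For example (the index and table names below are placeholders, not from your
schema):

REINDEX TABLE students;             -- rebuilds every index on the table
REINDEX INDEX students_login_idx;   -- or just the index used by the UPDATEs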


--
  Richard Huxton
  Archonet Ltd

Re: shared_buffer value

From
Tom Lane
Date:
"Anjan Dave" <adave@vantage.com> writes:
> Question is, does the 80MB buffer allocation correspond to ~87MB per
> postmaster instance? (with about 100 instances of postmaster, that will
> be about 100 x 80MB = 8GB??)

Most likely, top is counting some portion of the shared memory block
against each backend process.  This behavior is platform-specific,
however, and you did not tell us what platform you're on.

> Interestingly, at one point, we vacuumed the database, and the size
> reported by 'df -k' on the pgsql slice dropped very
> significantly...guess, it had been using a lot of temp files?

"At one point"?  If your setup doesn't include *routine* vacuuming,
you are going to have problems with file bloat.  This isn't something
you can do just when you happen to remember it --- it needs to be driven
off a cron job or some such.  Or use the contrib autovacuum daemon.
You want to vacuum often enough to keep the database size more or less
constant.

            regards, tom lane

Re: shared_buffer value

From
"Anjan Dave"
Date:
Sorry I wasn't clear. We do have nightly vacuum crons defined on all pg
servers. Apparently, this one had been taking many hours to finish
recently, so we did an additional vacuum during the day, when volume was
low, which finished quickly.

The platform I mentioned is Red Hat 9, PG 7.4, on a Dell PowerEdge 2650.

Here's the output of 'top' below, taken just now. I can capture the
stats during peak time in the afternoon also:

68 processes: 67 sleeping, 1 running, 0 zombie, 0 stopped
CPU0 states:   3.1% user   4.4% system    0.0% nice   0.0% iowait  92.0% idle
CPU1 states:   0.0% user   3.2% system    0.0% nice   0.0% iowait  96.3% idle
CPU2 states:   0.4% user   0.3% system    0.0% nice   0.0% iowait  98.3% idle
CPU3 states:   0.3% user   1.0% system    0.0% nice   0.0% iowait  98.2% idle
Mem:  3874188k av, 3622296k used,  251892k free,       0k shrd,  322372k buff
                   2369836k actv,  454984k in_d,   44568k in_c
Swap: 4096532k av,   24552k used, 4071980k free                 2993384k cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
 4258 postgres  16   0 88180  86M 85796 S     2.1  2.2  14:55   0 postmaster
 5260 postgres  15   0 85844  83M 84704 S     0.0  2.2   2:51   1 postmaster
14068 root      23   0 69240  67M  2164 S     3.9  1.7  59:44   2 wish
 3157 postgres  15   0 50364  49M 48484 S     0.0  1.2   0:02   3 postmaster
 2174 postgres  15   0 50196  48M 48380 S     0.1  1.2   0:00   0 postmaster
 3228 postgres  15   0 49292  48M 47536 S     0.0  1.2   0:00   3 postmaster
 3050 postgres  15   0 49184  47M 47364 S     0.5  1.2   0:00   2 postmaster
 2725 postgres  15   0  7788 7688  6248 S     0.0  0.1   0:00   3 postmaster
 3600 postgres  16   0  5812 5700  4784 S     0.0  0.1   0:00   3 postmaster
 1342 gdm       15   0 12988 5560  2056 S     0.0  0.1  19:36   3 gdmgreeter

According to top's man page:
RSS is: The total amount of physical memory used by the task, in kilobytes...
SHARE is: The amount of shared memory used by the task is shown...
SIZE is: The size of the task's code plus data plus stack space, in kilobytes...

I am not sure how to work out from the above values whether the 80MB
shared_buffers setting (in postgresql.conf) should be increased or decreased,
because during higher loads the number of postmaster instances goes up to
100 (limited by max_connections), each at an RSS of about 87MB...

Thanks,
anjan

-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Thursday, January 15, 2004 7:52 PM
To: Anjan Dave
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] shared_buffer value


"Anjan Dave" <adave@vantage.com> writes:
> Question is, does the 80MB buffer allocation correspond to ~87MB per
> postmaster instance? (with about 100 instances of postmaster, that
> will be about 100 x 80MB = 8GB??)

Most likely, top is counting some portion of the shared memory block
against each backend process.  This behavior is platform-specific,
however, and you did not tell us what platform you're on.

> Interestingly, at one point, we vacuumed the database, and the size
> reported by 'df -k' on the pgsql slice dropped very
> significantly...guess, it had been using a lot of temp files?

"At one point"?  If your setup doesn't include *routine* vacuuming, you
are going to have problems with file bloat.  This isn't something you
can do just when you happen to remember it --- it needs to be driven off
a cron job or some such.  Or use the contrib autovacuum daemon. You want
to vacuum often enough to keep the database size more or less constant.

            regards, tom lane

Re: shared_buffer value

From
"scott.marlowe"
Date:
On Fri, 16 Jan 2004, Anjan Dave wrote:

> 68 processes: 67 sleeping, 1 running, 0 zombie, 0 stopped
> CPU0 states:   3.1% user   4.4% system    0.0% nice   0.0% iowait  92.0% idle
> CPU1 states:   0.0% user   3.2% system    0.0% nice   0.0% iowait  96.3% idle
> CPU2 states:   0.4% user   0.3% system    0.0% nice   0.0% iowait  98.3% idle
> CPU3 states:   0.3% user   1.0% system    0.0% nice   0.0% iowait  98.2% idle
> Mem:  3874188k av, 3622296k used,  251892k free,       0k shrd,  322372k buff
>                    2369836k actv,  454984k in_d,   44568k in_c
> Swap: 4096532k av,   24552k used, 4071980k free                 2993384k cached

Note that that machine has 2993384k of kernel cache.  This means that
after all that it's doing, there's about 3 gigs of free memory, and the
kernel is just using it to cache files.  Should a process need that
memory, the kernel would free it right up.
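You can watch the same number with free; the "cached" column is what the
kernel will hand back on demand (illustrative command only):

free -m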

So, you don't have to worry about setting the buffers too high in
postgresql and running out of memory; you're not even close.

I'd crank up sort mem to 4 or 8 meg or so, and the shared buffers to
something a little higher, say 5000 to 10000 or so.  Note that there is a
point of diminishing returns in postgresql where if you allocate too much
buffer memory, it gets slower than just letting the kernel do it.
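In postgresql.conf terms the sort_mem part of that would look something like
the following (a sketch; the value is in kB and applies per sort, so budget
for concurrent sorts across all backends):

sort_mem = 8192                 # 8MB per sort operation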

>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
>  4258 postgres  16   0 88180  86M 85796 S     2.1  2.2  14:55   0 postmaster

This says that this process is using 88 meg or so of RAM, and of that 88
meg or so, 84 meg is shared between it and the other postgres processes.

>  5260 postgres  15   0 85844  83M 84704 S     0.0  2.2   2:51   1

Same here.  That means that this single process represents a delta of 1
meg or so.

>  3157 postgres  15   0 50364  49M 48484 S     0.0  1.2   0:02   3

Delta is about 2 meg, and so forth. I.e. you're not using 50 to 80 megs
per process, only 2 megs or so, plus the 80 meg of shared memory.

> I am not sure how do I calculate whether 80MB shared_buffer (in
> postgresql.conf)should be increased or decreased from the above values,
> because during higher loads, the number of postmaster instances go up to
> 100 (limited by max connections), each at an RSS of about 87MB...

Generally, increase it until it doesn't make things any faster.
80 meg is pretty small, especially for a machine with 4 gigs of RAM. The
upper limit is generally found to be around 256 meg or so, and that's what
we use on our machine at work. Note this may make smaller queries slower,
since the overhead of maintaining a large buffer costs a bit, but it makes
larger queries faster, so it's a trade-off.
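As a concrete sketch of that upper end (assuming 8k pages; shared_buffers
changes need a postmaster restart, and on Linux you may have to raise the
kernel's SHMMAX first):

shared_buffers = 32768          # 32768 x 8kB = 256MB

# as root, if the postmaster then refuses to start for lack of shared memory:
echo 536870912 > /proc/sys/kernel/shmmax    # e.g. 512MB, for some headroom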