Re: PostgreSQL settings for 12GB RAM

From: scott.marlowe
Subject: Re: PostgreSQL settings for 12GB RAM
Date:
Msg-id: Pine.LNX.4.33.0307071551330.5112-100000@css120.ihs.com
In reply to: PostgreSQL settings for 12GB RAM  (Chris Miles <chris_pg002@psychofx.com>)
Responses: Re: PostgreSQL settings for 12GB RAM  (Andrew Sullivan <andrew@libertyrms.info>)
List: pgsql-admin
On Mon, 7 Jul 2003, Chris Miles wrote:

> Hi admins,
>
> We are running production PostgreSQL (7.2.3) servers on beefy
> systems: 4x Xeon 1.4GHz; and 12GB of RAM.
>
> We are trying to determine optimal settings for
> shared_buffers and the other memory-related performance
> tunables, to make the best use of our 12GB.  But we are not
> sure what limits we may hit, so we are trying to take care.
>
> The systems are Redhat Linux 7.3 / 2.4.20-18.7bigmem
>
> Currently our settings are:
>
> /proc/sys/kernel/shmall
> 536870912
> /proc/sys/kernel/shmmax
> 536870912
>
> postgresql.conf:
> shared_buffers = 32768

That's a good start.  Keep in mind you don't want postgresql caching
everything; that's the kernel's job.  It's a good idea to give postgresql
enough memory to cache the current working data set and let the kernel
handle the rest.  If you're making result sets of 10 meg or so, then 256
meg is plenty.  If you're running 100 meg tables against other 100 meg
tables, then an increase in buffers might help.
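
For reference, shared_buffers is counted in 8k pages (assuming the stock
block size), so the arithmetic behind that 256 meg works out like this:

    SELECT 32768 * 8 / 1024 AS shared_buffers_mb;  -- 32768 pages * 8 kB = 256 MB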

This increase in buffers isn't free, since it costs postgresql the
overhead of managing those buffers, plus they live in unix shared memory,
which isn't all that fast compared to kernel-level cache.

> max_connections = 768

I'd increase both of the fsm settings (the next two below) if you do lots
of updates.  Also schedule hourly vacuums and analyzes; restrict them to
busy tables if they take too long to run (see the example after the
quoted settings).

> max_fsm_relations = 100 [default]
> max_fsm_pages = 10000 [default]
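
Something like this from an hourly cron job, where 'busytable' is just a
placeholder for whichever of your tables churn the most:

    VACUUM ANALYZE busytable;  -- 'busytable' is hypothetical; repeat for each hot table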

> wal_buffers = 32

Normally I tell people to avoid large sort_mem, but you've got 12 gigs to
play with, so I'd increase sort_mem to something like 16 or 32 megs.
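
Keep in mind sort_mem is allocated per sort, per backend, so another
option is to bump it only for the occasional big query.  A sketch (the
value is in kilobytes):

    SET sort_mem = 32768;  -- 32 MB, for the current session only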

> sort_mem = 2048
> vacuum_mem = 32768
> wal_files = 32
> checkpoint_segments = 10

The next one here tells the planner roughly how much memory the kernel is
using to cache the data underneath the postgresql database.  It's measured
in 8k pages.  So, assuming your machine is capable of caching say 10 gigs,
that would be 1310720 pages.  My machine let me set it to that, so it
should work.
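
The arithmetic, if you want to double check it:

    SELECT 10 * 1024 * 1024 / 8 AS effective_cache_size_pages;  -- 10 GB in kB / 8 kB per page = 1310720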

> effective_cache_size = 1000
>
> This currently gives us 256 MB of PG shared buffers, which
> still seems way too conservative given the amount of RAM
> we have to play with.

effective_cache_size and kernel buffering should help.  Feel free to
increase shared buffers up to about 1 or 2 gig; I think 2 gig is the limit
on 32 bit hardware.  Keep in mind that operations involving 10 meg at a
time will likely be MUCH slower with 2 gigs of shared buffers to keep
track of.  So the key is to set shared_buffers big enough to hold your
large result sets without making the little ones crawl.
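
For reference, the page counts at those sizes (and remember shmmax has to
cover the whole shared memory segment in bytes):

    SELECT 1024 * 1024 / 8 AS pages_for_1gb,      -- 131072 pages
           2 * 1024 * 1024 / 8 AS pages_for_2gb;  -- 262144 pages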

> Is there a limit on shmall/shmmax and how big we can make
> shared_buffers ?  Or should we just go ahead and give it
> 4, 6 or 8 GB of shared memory ?

On 64 bit systems, the limit is basically theoretical (i.e. 1/2 * 2^64 or
so).

> These servers are already quite busy, are serving multiple
> databases on each (one postgresql instance on each) with
> .../data/base/ at about 16GB in size currently (growing all
> the time).
>
> The kernel happily chews up all available RAM for I/O
> buffers, which is nice - but how much RAM is better utilized
> by PG shared buffers rather than kernel buffers ?

Only what pg needs.  Let the kernel do the heavy buffering.  Just make
sure to tell pgsql that it's doing it, via effective_cache_size.

> Any tuning tips for servers this large would be appreciated.

"explain analyze" is your friend? :-)

