Re: how much memory to allot to postgres?

From: scott.marlowe
Subject: Re: how much memory to allot to postgres?
Date:
Msg-id: Pine.LNX.4.33.0301291352420.21061-100000@css120.ihs.com
In reply to: how much memory to allot to postgres?  ("Brian Maguire" <bmaguire@vantage.com>)
List: pgsql-general
On Wed, 29 Jan 2003, Brian Maguire wrote:

> What is the suggested rule of thumb in allotting memory to postgres on a
> production level transactional database? 1/2, 3/4 ?
>
> If this was an example config, how much would you suggest allotting?
>
> Redhat 7.3
> 2 Gig Ram
> Dual Pentium 1.7 GHz

The standard recommendation is about 25% to 50% of available RAM.  Note
that due to subtle inefficiencies in SysV shared memory access versus the
generally higher performance of the kernel's own caching, it is not just
a percentage issue.

I.e. SysV shared memory works well with PostgreSQL at settings anywhere
up to about 256 MB or so, but after that you hit diminishing returns:
looking up a page in PostgreSQL's internal buffers is no longer much
faster than asking the kernel for a page of data from the hard drive that
happens to be sitting in its cache.
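
To translate that into postgresql.conf terms: shared_buffers is counted
in disk blocks, 8 KB each by default, so 256 MB works out to
256 MB / 8 KB = 32768 buffers, i.e. a line like

    shared_buffers = 32768

in postgresql.conf (assuming you haven't compiled with a non-default
block size).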

My experience has been that 256 MB is a nice sweet spot for our server,
which has 1.5 GB of RAM and runs lots of other stuff.  Here's what free
shows on my machine:

             total       used       free     shared    buffers     cached
Mem:       1543980    1460116      83864     266068      46484     859132
-/+ buffers/cache:     554500     989480
Swap:      2048208       2120    2046088

Note that well over 800 MB of my memory is acting as kernel file cache,
and I'm using almost none of my swap.  This is a good thing.
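
(As a sanity check on that -/+ buffers/cache line:

    1460116 used - 46484 buffers - 859132 cached = 554500 KB

which is the memory actually held by programs, and matches the "used"
figure free reports on that line.)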

If your settings for buffers and such get high enough to make the machine
swap, you've just had a meltdown, so to speak, and performance will drop
like a rock.  Since this is a knee effect, it's important never to go
past that point.

With little or no load this box can get about 270 tps from pgbench; with
load on it from LDAP, Apache, etc., I get about 1/4 to 1/2 of that.

Large queries that return 35 MB of data come back in about 3 seconds
(e.g. select * from some_big_table with no WHERE clause).

Queries that join several tables run well with this setup as well.

Watch out for setting sort_mem too high.  sort_mem is the maximum amount
of memory an individual sort can allocate, so if a query causes two or
three sorts to happen in parallel, each one can grab sort_mem worth of
memory.  More users running the same query can each grab the same amount
again.  Setting sort_mem to 256 MB and then issuing several very large
queries can bring the whole machine to its knees in a swap storm, making
it unresponsive.
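
To see how fast that adds up, take the 2 GB box above: ten concurrent
queries each doing three sorts at 256 MB apiece could, in the worst case,
try to allocate

    10 queries x 3 sorts x 256 MB = 7680 MB

which is nearly four times the physical RAM, i.e. a guaranteed swap
storm.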

So, the standard method is: come up with a test data set and queries that
are representative of what you'll actually do, throw them at the
database, and increase buffer memory until it stops helping, then pick a
number around the knee of that helping-more/not-helping-more curve.
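
Here's a rough sketch of that loop using pgbench; the database name
(bench), data directory, and config path are just placeholders for
whatever your install uses, and the sed -i flag assumes GNU sed:

    # build a scratch pgbench database once (scale 10 is ~1M rows)
    createdb bench
    pgbench -i -s 10 bench

    # sweep shared_buffers (counted in 8 KB pages), benchmark each setting
    for bufs in 4096 8192 16384 32768 65536; do
        sed -i "s/^shared_buffers.*/shared_buffers = $bufs/" \
            /var/lib/pgsql/data/postgresql.conf
        pg_ctl restart -D /var/lib/pgsql/data
        sleep 5
        echo "shared_buffers = $bufs"
        pgbench -c 10 -t 500 bench
    done

Plot tps against shared_buffers and the knee should be obvious.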

For sort_mem, I stick to 8192 KB, which lets me handle a very high
parallel load without the machine buckling, but is big enough to make for
fast sorts.
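
In postgresql.conf terms (sort_mem is specified in kilobytes), that's
just:

    sort_mem = 8192        # 8 MB per sort, per backend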

Be sure to test your queries in parallel, as parallel load is always the
real issue for most databases to handle well.
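
An easy way to see the difference is to run the same total work from
pgbench with one client and then with twenty (same bench database as in
the sketch above):

    pgbench -c 1 -t 1000 bench
    pgbench -c 20 -t 50 bench

If the second run's tps collapses rather than merely dipping, your memory
settings are probably too aggressive for concurrent load.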

