Thread: memory question


memory question

From
"Campbell, Lance"
Date:

PostgreSQL 8.4.3

Linux Redhat 5.0

 

Question: How much memory do I really need?

 

From my understanding there are two primary strategies for setting up PostgreSQL in relationship to memory:

 

1)      Rely on Linux to cache the files.  In this approach you set the shared_buffers to a relatively low number. 

2)      You can set shared_buffers to a very large percentage of your memory so that PostgreSQL reserves the memory for the database.
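
For concreteness, here is roughly what I mean by the two options, reading total RAM from /proc/meminfo (just an illustration I sketched; the 10% and 60% fractions are arbitrary placeholders, not recommendations):

#!/usr/bin/env python3
# Illustration only: print an example shared_buffers value for each strategy,
# based on the machine's total RAM (Linux).  The 10% / 60% fractions are
# arbitrary placeholders, not tuning advice.
with open("/proc/meminfo") as f:
    mem_kb = next(int(line.split()[1]) for line in f
                  if line.startswith("MemTotal:"))
ram_gb = mem_kb / 2.0 ** 20

# Option 1: small shared_buffers, let the kernel page cache do most of the work.
print("option 1: shared_buffers = %dGB of %.0fGB RAM"
      % (max(1, int(ram_gb * 0.1)), ram_gb))
# Option 2: large shared_buffers, PostgreSQL holds most of the data itself.
print("option 2: shared_buffers = %dGB of %.0fGB RAM"
      % (int(ram_gb * 0.6), ram_gb))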

 

I am currently using option #1.  I have 24 Gig of memory on my server and the database takes up 17 Gig of disk space.  When I do the Linux command “top” I notice that 19 Gig is allocated for cache.  Is there a way for me to tell how much of that cache is associated with the caching of database files?

 

I am basically asking how much memory do I really need?  Maybe I have complete overkill.  Maybe I am getting to the point where I might need more memory.

 

My thought was that I could use option #2 and start by setting the number to a lower amount.  If the performance is bad, then slowly work the number up.

 

Our server manager seems to think that I have way too much memory.  He thinks that we only need 5 Gig.  I don’t really believe that, but I want to cover myself.  With money tight, I don’t want to be the person who is wasting resources.  We need to replace our database servers, so I want to do the right thing.

 

Thanks,

 

Lance Campbell

Software Architect/DBA/Project Manager

Web Services at Public Affairs

217-333-0382

 

Re: memory question

From
Scott Marlowe
Date:
On Wed, Mar 24, 2010 at 6:49 PM, Campbell, Lance <lance@illinois.edu> wrote:
> PostgreSQL 8.4.3
>
> Linux Redhat 5.0
>
> Question: How much memory do I really need?

The answer is "as much as needed to hold your entire database in
memory and a few gig left over for sorts and backends to play in."

> From my understanding there are two primary strategies for setting up
> PostgreSQL in relationship to memory:
>
>
>
> 1)      Rely on Linux to cache the files.  In this approach you set the
> shared_buffers to a relatively low number.
>
> 2)      You can set shared_buffers to a very large percentage of your memory
> so that PostgreSQL reserves the memory for the database.

The kernel is better at caching large amounts of memory.  Pg is better
at handling somewhat smaller amounts, and at not flushing out
random-access data to make room for sequential-access data.

> I am currently using option #1.  I have 24 Gig of memory on my server and
> the database takes up 17 Gig of disk space.  When I do the Linux command
> “top” I notice that 19 Gig is allocated for cache.  Is there a way for me to
> tell how much of that cache is associated with the caching of database
> files?

Probably nearly all of that 19G for cache is allocated for pg files.
Not sure how to tell off the top of my head though.
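
One rough way to do it on Linux (a sketch, untested here; it assumes the
data directory is /var/lib/pgsql/data and that you can read the files,
e.g. as the postgres user): walk PGDATA, map each file read-only and ask
the kernel via mincore(2) which pages are resident in the page cache.

#!/usr/bin/env python3
# Sketch: estimate how much of the OS page cache holds PostgreSQL data files.
import ctypes, ctypes.util, mmap, os, sys

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]
libc.munmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t]
libc.mincore.argtypes = [ctypes.c_void_p, ctypes.c_size_t,
                         ctypes.POINTER(ctypes.c_ubyte)]
PAGE = os.sysconf("SC_PAGE_SIZE")
MAP_FAILED = ctypes.c_void_p(-1).value

def resident_bytes(path):
    """Return (resident, total) bytes for one file; (0, 0) on any error."""
    try:
        size = os.path.getsize(path)
        if size == 0:
            return 0, 0
        fd = os.open(path, os.O_RDONLY)
    except OSError:
        return 0, 0
    try:
        addr = libc.mmap(None, size, mmap.PROT_READ, mmap.MAP_SHARED, fd, 0)
        if addr in (None, MAP_FAILED):
            return 0, size
        npages = (size + PAGE - 1) // PAGE
        vec = (ctypes.c_ubyte * npages)()
        resident = 0
        if libc.mincore(addr, size, vec) == 0:
            resident = sum(b & 1 for b in vec) * PAGE
        libc.munmap(addr, size)
        return min(resident, size), size
    finally:
        os.close(fd)

pgdata = sys.argv[1] if len(sys.argv) > 1 else "/var/lib/pgsql/data"  # assumed path
cached = total = 0
for root, _dirs, files in os.walk(pgdata):
    for name in files:
        r, s = resident_bytes(os.path.join(root, name))
        cached += r
        total += s
print("data files: %.1f GB, resident in OS cache: %.1f GB"
      % (total / 2.0 ** 30, cached / 2.0 ** 30))

Caveat: that only counts the kernel page cache, not PostgreSQL's own
shared_buffers, and the numbers move around as the cache churns.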

> I am basically asking how much memory do I really need?  Maybe I have
> complete overkill.  Maybe I am getting to a point where I might need more
> memory.

Actually, there are three levels of caching that are possible.  1: The
entire db, tables and indexes, can fit in RAM.  This is the fastest
method.  Worth the extra $ for RAM if you can afford it / the db isn't too
huge.  2: Indexes can fit in RAM, some of the tables can.  Still pretty
fast.  Definitely worth paying a little extra for.  3: Neither indexes
nor tables can wholly fit in RAM.  At this point the speed of your
large disk array becomes important, and you want a fast caching RAID
controller.  Both of those items (disk array and RAID controller) are
considerably more costly than 16 or 32 Gigs of RAM.
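
If you want a rough check of which case you're in, something like this
would do it (a sketch assuming psycopg2 and a Linux box; the connection
string is a placeholder, it only counts the database you connect to, and
it ignores TOAST):

#!/usr/bin/env python3
# Sketch: compare total table size, total index size and physical RAM to
# see which of the three caching cases applies.
import psycopg2

conn = psycopg2.connect("dbname=mydb")   # placeholder connection string
cur = conn.cursor()
cur.execute("SELECT coalesce(sum(pg_relation_size(oid)), 0) "
            "FROM pg_class WHERE relkind = 'r'")
tables = int(cur.fetchone()[0])
cur.execute("SELECT coalesce(sum(pg_relation_size(indexrelid)), 0) "
            "FROM pg_index")
indexes = int(cur.fetchone()[0])
conn.close()

with open("/proc/meminfo") as f:          # Linux only
    ram = next(int(l.split()[1]) for l in f if l.startswith("MemTotal:")) * 1024

gb = lambda b: b / 2.0 ** 30
print("tables %.1f GB, indexes %.1f GB, RAM %.1f GB"
      % (gb(tables), gb(indexes), gb(ram)))
if tables + indexes < ram:
    print("case 1: the whole database can fit in RAM")
elif indexes < ram:
    print("case 2: indexes fit in RAM, tables only partly")
else:
    print("case 3: neither fits; disk array speed matters most")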

> My thought was I could use option #2 and then set the number to a lower
> amount.  If the performance is bad then slowly work the number up.

I'm not sure what you mean.  Install less RAM and let PG do all the
caching?  Usually a bad idea.  Usually.  I'm sure there are use cases
where it might be a good idea.  But keep in mind, a large
shared_buffers setting doesn't JUST buffer your reads, it also gives
PostgreSQL a much larger memory space to keep track of in terms of
things that need to get written out etc.  I'm actually about to reduce
the shared_buffers from 8G on one reporting server down to 1 or 2G because
that's plenty, and it's having a hard time keeping up with the huge
checkpoints it's having to do.

> Our server manager seems to think that I have way too much memory.  He thinks
> that we only need 5 Gig.

How much do you absolutely need to boot up, run postgresql, and not
run out of memory?  That's what you "need", and it's probably around
1 Gig.  It's just no less arbitrary than 5G.  Did he show you how he
arrived at that number?  If your DB is 17 Gig on disk, it's foolish to
be cheap on memory.

> I don’t really believe that.  But I want to cover
> myself.  With money tight I don’t want to be the person who is wasting
> resources.  We need to replace our database servers so I want to do the
> right thing.

You can waste your time (valuable, but a sunk cost), other people's time
(more valuable, also a sunk cost), or "waste" a few dollars on memory.
24 Gig isn't that expensive, really, compared to say 10 seconds per
transaction for 100 users, 1000 times a day.  That's about 11 user-days
lost in a single day.  Tens of seconds start to add up.
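
For the arithmetic behind that:

# 10 s per transaction x 100 users x 1000 transactions/day, expressed in days:
print(10 * 100 * 1000 / 86400.0)   # ~11.6 user-days lost per calendar day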

Re: memory question

From
Dave Crooke
Date:
What Scott said ... seconded, all of it.

I'm running one 500GB database on a 64-bit, 8GB VMware virtual machine with 2 vcores, PG 8.3.9 with shared_buffers set to 2GB, and it works great. However, it's a modest workload: most of the database is archival for data mining, the "working set" for routine OLTP is pretty modest and easily fits in the 2GB, and it's back-ended onto a pretty decent EMC Clariion FibreChannel array. Not the typical case.

For physical x86 servers, brand-name (e.g. Kingston) ECC memory is down to $25 per GB in 4GB DIMMs, and $36 per GB in 8GB DIMMs ... dollars to doughnuts you have a server somewhere with 2GB or 4GB parts that can be pulled and replaced with double the density, et voila, an extra 16GB of RAM for about $500.

Lots and lots of RAM is absolutely, positively a no-brainer when trying to make a DB go fast. If for no other reason than that people get all starry-eyed at GHz numbers, almost all computers tend to be CPU-heavy and RAM-light in their factory configs. I build a new little server for the house every 3-5 years using desktop parts, and give it a mid-life upgrade with bigger drives and double the RAM density.

Big banks running huge Oracle OLTP setups use the strategy of essentially keeping the whole thing in RAM ... HP shifts a lot of Superdomes maxed out with 2TB of RAM into this market - and that RAM costs a lot more than $25 a gig ;-)

Cheers
Dave



 

Re: memory question

From
Matthew Wakeling
Date:
On Wed, 24 Mar 2010, Campbell, Lance wrote:
> I have 24 Gig of memory on my server...
>
> Our server manager seems to think that I have way to much memory.  He
> thinks that we only need 5 Gig.

Your organisation probably spent more money getting your server manager to
investigate how much RAM you need and scare you about wasting resources
than it would cost to just slap 24GB in the machine.

24GB is the least amount of RAM I would consider putting in a new server
nowadays. It's so cheap.

Matthew

--
 Lord grant me patience, and I want it NOW!