Re: how big shmmax is good for Postgres...

From Scott Marlowe
Subject Re: how big shmmax is good for Postgres...
Date
Msg-id dcc563d10807100511s764b51fahac15fbd2fe932822@mail.gmail.com
In response to Re: how big shmmax is good for Postgres...  ("Scott Marlowe" <scott.marlowe@gmail.com>)
Responses Re: how big shmmax is good for Postgres...
List pgsql-performance
I just wanted to add to my previous post that shared_memory generally
has a performance envelope of quickly increasing performance as you
first increase shared_memory, then a smaller performance step with each
further increase.  Once the whole working set of your data fits, the
returns start to fall off quickly.  If you were on a machine with
infinite memory and there were no penalties for a large
shared_buffers, you would keep increasing it until you were comfortably
in that level area.  If your working set were 1G on a dedicated machine
with 64G, you could treat memory as functionally unlimited; somewhere
past about 1.1G of shared_buffers you would see no further performance
increase.
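
To make the mechanics concrete: shared_buffers lives in SysV shared
memory, so the kernel's shmmax has to be at least that big before
PostgreSQL will start.  A minimal sketch for Linux, with placeholder
sizes, not a recommendation:

    # /etc/sysctl.conf -- allow ~1.5 GB of SysV shared memory
    kernel.shmmax = 1610612736    # max single segment, in bytes
    kernel.shmall = 393216        # total shared memory, in (typically 4 kB) pages

    # postgresql.conf -- then size the buffer pool inside that limit
    shared_buffers = 1GB          # 8.2 and later accept memory units

Load the sysctl change with "sysctl -p" and restart PostgreSQL afterwards.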

In real life, maintaining a cache has costs too.  The kernel of most
modern OSes caches data very well, and it does so especially well for
large chunks of data.  So on a machine that big, the OS would be
caching the whole dataset as well.

What we're interested in is the working set size.  If you typically
call up a couple of K of data, bang it around some and commit, then
never hit that section again for hours, and all your calls do that, and
the access area is evenly spread across the database, then your working
set is, over time, the whole database.  This is a common access pattern
when dealing with transactional systems.

If you commonly have 100 transactions doing that at once, then you
multiply how much memory they use by 100 to get the total buffers
actually in use at any moment, and the rest is likely NEVER going to
get used.
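
To put rough, purely illustrative numbers on that: if each of those
transactions touches around 500 kB of data and you run 100 of them
concurrently, the hot part of the buffer pool is only about

    100 transactions * 500 kB = ~50 MB

and buffers much beyond a small multiple of that just sit idle.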

In these systems, what seems like a bad idea, lowering shared_buffers,
might be exactly the right call.

For session servers and large transactional systems, it's often best
to let the OS do most of the caching of the data, and to have enough
shared_buffers to hold 2-10 times the in-memory working set.  This
usually works out to a shared_buffers setting of a few hundred
megabytes.
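
Continuing the made-up numbers above, that might look something like
this in postgresql.conf (illustrative values only):

    shared_buffers = 256MB          # ~5x the ~50 MB hot set
    effective_cache_size = 48GB     # tell the planner the OS is caching the rest

effective_cache_size doesn't allocate anything; it just tells the
planner roughly how much data it can expect the OS cache to hold.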

The advantage here is that PostgreSQL doesn't have to spend a lot of
time maintaining a large buffer pool, and checkpoints are cheaper.
With spare CPU, the background writer can use idle I/O cycles to write
out the smaller number of dirty pages in shared_buffers, and the system
runs faster.
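
For reference, the knobs involved look roughly like this (8.3-era
names; the values are placeholders to show the idea, not tuning
advice):

    # postgresql.conf
    bgwriter_delay = 200ms                 # how often the background writer wakes up
    bgwriter_lru_maxpages = 100            # max dirty buffers written per round
    bgwriter_lru_multiplier = 2.0          # write ahead of predicted demand
    checkpoint_segments = 16               # WAL allowed between checkpoints
    checkpoint_completion_target = 0.5     # spread checkpoint I/O out (8.3+)

With a small shared_buffers there simply aren't many dirty pages for a
checkpoint to flush, which is the point above.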

The same is true of session servers.  If a DB is just used for tracking
logged-in users, it only needs a tiny amount of shared_buffers.

Conversely, you need large numbers of shared_buffers when you have
something like a large social networking site.  A LOT of people
updating a large data set at the same time likely need far more
shared_buffers to run well.  A user might be inputting data for several
minutes or even hours, and the same pages are getting hit over and
over.  For this kind of app, you need as much memory as you can afford
to throw at the problem, and a large, reasonably fast RAID array.  A
large controller cache means your RAID array only has to write, on
average, as fast as the database commits data.
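
On that kind of box the settings head the other way; again just a
sketch with invented sizes:

    # postgresql.conf -- dedicated box with lots of RAM, write-heavy hot set
    shared_buffers = 8GB
    checkpoint_segments = 64               # fewer, larger checkpoints
    checkpoint_completion_target = 0.9     # smear the checkpoint writes out
    effective_cache_size = 24GB

plus a RAID controller with a battery-backed write cache to soak up the
commit stream.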

Once shared_buffers is larger than half your memory, you're relying on
PostgreSQL for caching more than on the kernel.  As shared_buffers goes
past 75% of available memory, you have three times as much cache under
PostgreSQL's control as under the OS's control.  That only makes sense
if you've already tested this and found that PostgreSQL's caching works
better for you than the OS's caching does.  I haven't seen that happen
a lot.  Maybe some large transactional systems would work well that
way.
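
With concrete, made-up numbers: on a 16G box, shared_buffers = 12GB is
75% of RAM, which leaves roughly

    16 GB - 12 GB = ~4 GB

for the kernel's page cache and everything else, i.e. about a 3:1 split
in PostgreSQL's favor.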

My point here, and I have one, is that a larger shared_buffers setting
only makes sense if you can use it.  It can work against you in a few
different ways that aren't obvious up front: more expensive
checkpoints, a self-inflicted DOS from a swap storm, using up memory
that the kernel might be better at using as cache, etc.

So the answer really is: do some realistic testing, with an eye
towards anomalous behavior.
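
One rough way to start (pgbench's built-in workload is nothing like
most real apps, so treat it as a smoke test; "testdb" is a placeholder
name):

    # initialize a test database at scale factor 100 (roughly 1.5 GB)
    pgbench -i -s 100 testdb

    # 16 concurrent clients, 10000 transactions each; repeat at several
    # shared_buffers settings and watch for latency spikes at checkpoints
    pgbench -c 16 -t 10000 testdb

while keeping an eye on vmstat/iostat for swap activity and write
bursts.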
