Re: Debugging shared memory issues on CentOS

From: Mack Talcott
Subject: Re: Debugging shared memory issues on CentOS
Date:
Msg-id: CAPZbZCAd2PDbzaRdciL0ZLXbBBN7xMoZsKEd2os_VmazhQ+Bqw@mail.gmail.com
In reply to: Re: Debugging shared memory issues on CentOS  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-performance
> There's quite a lot of stuff that a PG process will cache in local memory
> once it's acquired the info, for example:
> - relcache (relation descriptors)
> - catcache (system catalog entries)
> - compiled trees for plpgsql functions
>
> 300mb worth of that stuff seems on the high side, but perhaps you have
> lots and lots of tables, or lots and lots of functions?
>
> If this is the explanation then flushing that info just results in
> restarting from a cold-cache situation, which doesn't seem likely to
> be a win.  You're just going to be paying to read it in again.

It does seem a bit on the high side, but that makes sense.  There are
about 90 tables and 5 functions in each schema (all of the schemas are
identical), but there are several infrequent queries for overall
statistics that UNION ALL results across every schema.  That seems like
the most likely culprit, as there are ~500 of these schemas.
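
For concreteness, those statistics queries are shaped roughly like the
sketch below (the schema and table names are made up for illustration).
Each branch names a different schema, so a single run touches relation
and catalog entries for every one of the ~500 schemas:

    -- hypothetical shape of an all-schema statistics query
    SELECT 'customer_0001' AS schema_name, count(*) AS row_count
      FROM customer_0001.orders
    UNION ALL
    SELECT 'customer_0002', count(*)
      FROM customer_0002.orders
    UNION ALL
    -- ... one branch per schema, ~500 in total
    SELECT 'customer_0500', count(*)
      FROM customer_0500.orders;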

However, since the app serves a variety of customers, each request
queries a different schema.  It seems like these caches would
eventually get pretty large even without the all-schema queries.
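
As an aside for anyone hitting this on PostgreSQL 14 or later: the
pg_backend_memory_contexts view can show how large those caches
actually get inside a single backend (it doesn't exist on older
releases), e.g.:

    -- largest memory contexts in the current backend (PostgreSQL 14+ only);
    -- CacheMemoryContext is where relcache and catcache entries mostly live
    SELECT name, used_bytes, total_bytes
      FROM pg_backend_memory_contexts
     ORDER BY total_bytes DESC
     LIMIT 10;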

> Swappiness has nothing to do with disk cache.  Disk cache just means that
> the kernel is free to use any spare memory for copies of file pages it's
> read from disk lately.  This is almost always a good thing, because it
> saves reading those pages again if they're needed again.  And the key word
> there is "spare" --- the kernel is at liberty to drop those cached pages
> if it needs the memory for something more pressing.  So there's really no
> downside.  Trying to reduce that number is completely counterproductive.
> Rather, my observation was that if you had a gig and a half worth of RAM
> that the kernel felt it could afford to use for disk caching, then you
> weren't having much of a memory problem.  However, apparently that
> snapshot wasn't representative of your problem case:

I see.  So maybe the kernel is _first_ deciding that some of the
inactive processes' memory should be swapped out, and then, since that
leaves free memory, using it for disk cache?

> I wonder if the short answer for this isn't that you should be using fewer
> backends by running a connection pooler.

If I can figure out the maximum number of connections that my server
can handle, that's definitely a possibility.
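
As a rough first step toward sizing a pooler, something like this (on
9.2 or later, where pg_stat_activity has a state column) shows how many
connections are actually busy at a given moment versus merely open:

    -- live backends grouped by state (active, idle, idle in transaction, ...)
    SELECT state, count(*)
      FROM pg_stat_activity
     GROUP BY state
     ORDER BY count(*) DESC;

    -- configured connection ceiling
    SHOW max_connections;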

> If the backends want to cache a
> couple hundred meg worth of stuff, it's probably wise to let them do so.
> Or maybe you should just buy some more RAM.  8GB is pretty puny for a
> server these days (heck, the obsolete laptop I'm typing this mail on
> has half that much).

More memory is definitely a good solution.  This server is on EC2, and
I'm working on replacing it with an instance that has twice as much
RAM.  However, my concern is that if I double the number of app servers
to handle higher load, I will run into the same issue again.

I assume the memory of each process grows until it has all 90 tables
from all 500 schemas cached in some way.  Any ideas for optimizations
that would reduce memory usage in this case, with so many identical
schemas?  I'm guessing that using views rather than plain SELECT
statements wouldn't help.  Are there any Postgres configuration
settings concerning caching that I should look at?  Or a different
approach to organizing the data?
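
For a sense of scale, a rough count of how many relations the catalogs
hold (each one a potential relcache entry in every backend that touches
it) would be something like:

    -- user relations (tables, indexes, sequences, ...) outside the system
    -- schemas; with ~500 identical schemas of ~90 tables each, plus their
    -- indexes, this can easily run to tens of thousands of entries
    SELECT count(*) AS user_relations
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE n.nspname NOT IN ('pg_catalog', 'pg_toast', 'information_schema');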

Thanks, Tom.  I really appreciate your feedback!

>
>                         regards, tom lane

