Re: Debugging shared memory issues on CentOS

From: Merlin Moncure
Subject: Re: Debugging shared memory issues on CentOS
Date:
Msg-id: CAHyXU0wv+ONa_ou+0bH92EcmX2smfRyh_ikN-vLvXe87p6GiWw@mail.gmail.com
In reply to: Re: Debugging shared memory issues on CentOS  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Debugging shared memory issues on CentOS  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-performance
On Wed, Dec 11, 2013 at 9:39 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Mack Talcott <mack.talcott@gmail.com> writes:
>> The pattern I am seeing is that postgres processes keep growing in
>> shared (this makes sense as they access more of the shared memory, as
>> you've pointed out) but also process-specific memory as they run more
>> queries.  The largest ones are using around 300mb of process-specific
>> memory, even when they're idle and outside of any transactions.
>
> There's quite a lot of stuff that a PG process will cache in local memory
> once it's acquired the info, for example:
> - relcache (relation descriptors)
> - catcache (system catalog entries)
> - compiled trees for plpgsql functions
>
> 300mb worth of that stuff seems on the high side, but perhaps you have
> lots and lots of tables, or lots and lots of functions?

This has got to be the problem.  It's known that pathological
workloads (lots and lots of tables, views, and functions) abuse the
cache memory segment.  There's no cap on cache memory, so over time a
backend will just accumulate entries until there's nothing left to cache.  For
most applications, this doesn't even show up on the radar.  However,
300mb per postgres backend will burn through that 8gb pretty quickly.
It's tempting to say, "there should be a limit to backend local cache"
but it's not clear if the extra tracking is really worth it all things
considered.  There was some discussion about this (see the archives).
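As a quick sanity check (not from the thread; a minimal sketch assuming
you can run psql against the affected database), counting catalog
entries gives a rough idea of whether "lots and lots of tables and
functions" applies -- counts in the tens of thousands make
multi-hundred-MB idle backends much more plausible:

  -- rough gauge of how much there is for relcache/catcache to accumulate
  SELECT
      (SELECT count(*) FROM pg_class)     AS relations,  -- tables, indexes, views, ...
      (SELECT count(*) FROM pg_proc)      AS functions,  -- includes plpgsql functions
      (SELECT count(*) FROM pg_attribute) AS columns;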

Workarounds:
*) install a connection pooler (as Tom noted), in particular pgbouncer.
For workloads like this you will want to be spartan on the number of
physical connections -- say, 1 * number of cores.  For this option to
work you need to use transaction mode, which in turn limits use of
session-dependent features (advisory locks, NOTIFY, prepared
statements).  Also, if your client stack is Java you need to take some
extra steps.  A minimal config sketch follows the list.
*) add memory
*) force connections to recycle every X period of time
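
Here is a minimal pgbouncer sketch along those lines (not from the
thread; the database name and parameter values are illustrative, and
server_lifetime is the knob that covers the "recycle connections
periodically" workaround):

  ; pgbouncer.ini (illustrative values)
  [databases]
  ; "mydb" is a placeholder for the real database
  mydb = host=127.0.0.1 port=5432 dbname=mydb

  [pgbouncer]
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt
  ; transaction pooling is what allows a small physical pool
  pool_mode = transaction
  ; roughly 1 * number of cores
  default_pool_size = 8
  ; recycle server connections hourly, bounding per-backend cache growth
  server_lifetime = 3600

Clients then connect to port 6432 instead of 5432; pgbouncer multiplexes
them onto the small set of long-lived (but periodically recycled)
postgres backends.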

merlin

