BUG #12183: Memory leak in long running sessions

From: valgog@gmail.com
Subject: BUG #12183: Memory leak in long running sessions
Msg-id: 20141208184618.2526.62775@wrigleys.postgresql.org
Replies: Re: BUG #12183: Memory leak in long running sessions
List: pgsql-bugs
The following bug has been logged on the website:

Bug reference:      12183
Logged by:          Valentine Gogichashvili
Email address:      valgog@gmail.com
PostgreSQL version: 9.3.4
Operating system:   Ubuntu Precise (3.13.0-30)
Description:

Hello,

this will be difficult to debug, but I will start with this report and provide
more information as requested.

The situation is as follows:

Postgres 9.3.4 is running on a physical machine with 125GB of RAM.

A Java application opens connections through a connection pool and reuses them
until the application is redeployed. So, depending on the deployment cycle,
connections can grow up to two weeks old.

The Java application makes a lot of stored procedure calls, including ones
whose SQL statements consume a lot of memory.

We experience situations where some of the sessions (in our case the oldest
ones) do not give the memory back.

The "solution" in our case is to kill the oldest idle connections. But of
cause this is not a nice solution.
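
For completeness, the kind of statement we run for that cleanup looks roughly
like the following. This is only a sketch with an example threshold of seven
days, not our exact production script:

    -- terminate idle backends whose connection is older than a threshold
    -- (example values; not our exact production script)
    SELECT pid, usename, backend_start, pg_terminate_backend(pid)
      FROM pg_stat_activity
     WHERE state = 'idle'
       AND backend_start < now() - interval '7 days'
       AND pid <> pg_backend_pid();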

An excerpt of the "ps fauxw" output is the following:

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
postgres   959  0.0  0.0 8736052 99316 ?       S    Jun22 111:57 /server/postgres/9.3.4/bin/postgres -D /data/postgres/pgsql_whdbef/9.3/data
postgres   960  0.0  0.0 102544  7160 ?        Ss   Jun22 113:32  \_ postgres: logger process
postgres 44057  0.3  6.4 8744884 8531256 ?     Ss   Sep10 425:46  \_ postgres: checkpointer process
postgres 44058  0.2  6.4 8740288 8504052 ?     Ss   Sep10 267:35  \_ postgres: writer process
postgres 44059  0.6  0.0 8740152 18648 ?       Ss   Sep10 870:12  \_ postgres: wal writer process
postgres 44060  0.0  0.0 8741060 4328 ?        Ss   Sep10   8:44  \_ postgres: autovacuum launcher process
postgres 44061  0.0  0.0  98512  1904 ?        Ss   Sep10  16:14  \_ postgres: archiver process   last was 000000010000386C000000E1
postgres 44062  1.3  0.0 100304  3020 ?        Ss   Sep10 1804:45  \_ postgres: stats collector process
postgres 11544  1.0  0.0 8741376 3380 ?        Ss   Nov04 518:15  \_ postgres: wal sender process standby 10.228.2.29(56620) streaming 386C/E2FEEB30
postgres 41851  1.0  0.0 8741376 3024 ?        Ss   Nov04 523:59  \_ postgres: wal sender process standby 10.228.2.34(35268) streaming 386C/E2FEEB30
postgres 39234  1.8  6.7 9059376 8884356 ?     Ss   Dec03 138:16  \_ postgres: zomcat_p4820 prod_wh_ef_main_db 10.228.2.41(39735) idle
postgres 54021  5.3  7.5 10397052 9927416 ?    Ss   08:57  33:45  \_ postgres: zomcat_p4820 prod_wh_ef_main_db 10.228.2.40(57772) idle
postgres 31659  0.1  0.2 8766116 278368 ?      Ss   19:15   0:01  \_ postgres: zomcat_p4810 prod_wh_ef_main_db 10.228.2.41(46032) idle
... more processes

The result of "select * from pg_stat_activity order by backend_start
limit 5" is the following:

-[ RECORD 1 ]----+----------------------------------------------------------
datid            | 16414
datname          | prod_wh_ef_main_db
pid              | 39234
usesysid         | 18361
usename          | zomcat_p4820
application_name |
client_addr      | 10.228.2.41
client_hostname  |
client_port      | 39735
backend_start    | 2014-12-03 13:01:44.983123+01
xact_start       |
query_start      | 2014-12-08 19:28:46.091008+01
state_change     | 2014-12-08 19:28:46.091531+01
waiting          | f
state            | idle
query            | select * from very_fast_sproc($1) as result
-[ RECORD 2 ]----+----------------------------------------------------------
datid            | 16414
datname          | prod_wh_ef_main_db
pid              | 54021
usesysid         | 18361
usename          | zomcat_p4820
application_name |
client_addr      | 10.228.2.40
client_hostname  |
client_port      | 57772
backend_start    | 2014-12-08 08:57:05.976879+01
xact_start       |
query_start      | 2014-12-08 19:28:46.135191+01
state_change     | 2014-12-08 19:28:46.213875+01
waiting          | f
state            | idle
query            | SELECT * FROM fast_sproc ( $1,$2,$3,$4,$5,$6,$7,$8,$9 )
-[ RECORD 3 ]----+----------------------------------------------------------
datid            | 16414
datname          | prod_wh_ef_main_db
pid              | 31659
usesysid         | 25509794
usename          | zomcat_p4810
application_name |
client_addr      | 10.228.2.41
client_hostname  |
client_port      | 46032
backend_start    | 2014-12-08 19:15:51.126914+01
xact_start       |
query_start      | 2014-12-08 19:28:29.45225+01
state_change     | 2014-12-08 19:28:29.452271+01
waiting          | f
state            | idle
query            | SELECT pg_advisory_unlock(42)
-[ RECORD 4 ]----+----------------------------------------------------------
datid            | 16414
datname          | prod_wh_ef_main_db
pid              | 31662
usesysid         | 25509794
usename          | zomcat_p4810
application_name |
client_addr      | 10.228.2.41
client_hostname  |
client_port      | 46034
backend_start    | 2014-12-08 19:15:51.210825+01
xact_start       |
query_start      | 2014-12-08 19:28:44.452563+01
state_change     | 2014-12-08 19:28:44.452578+01
waiting          | f
state            | idle
query            | SELECT pg_advisory_unlock(42)
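
For reference, a sketch of the kind of query we can use to correlate the
oldest backends with the ps output above (column names as in 9.3's
pg_stat_activity; nothing beyond that is assumed):

    -- oldest backends with connection age and time since last state change
    SELECT pid, usename, state,
           now() - backend_start AS connection_age,
           now() - state_change  AS time_since_state_change
      FROM pg_stat_activity
     ORDER BY backend_start
     LIMIT 5;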


Does anybody have any ideas?

Regards,

-- Valentine
