Killing off old postgres processes in a friendly way?

From: Rainer Mager
Subject: Killing off old postgres processes in a friendly way?
Date:
Msg-id: NEBBJBCAFMMNIHGDLFKGEECBCBAA.rmager@vgkk.co.jp
In reply to: Backend closed the channel unexpectedly  ("Marco A. Bravo" <marco@ife.org.mx>)
List: pgsql-admin
Hi all,

    I believe something like this question has been asked here before but I
don't remember seeing an answer.

    Briefly, the problem we are having is that we sometimes open connections
(JDBC) to our database and then do not properly close them. The odd thing is
that postgres itself does not EVER seem to time them out and close them.
We've had processes over 2 weeks old that just sat there doing nothing.
Finally we restarted postgres to fix the problem.

    So, is there a setting for postgres (postmaster) so that it will time out
old, unused connections?
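A note on what exists only in much newer PostgreSQL releases than the one discussed here: idle_in_transaction_session_timeout (9.6) and idle_session_timeout (14) provide exactly this kind of server-side timeout, and pg_terminate_backend() (8.4) lets an administrator kill a single backend without restarting postmaster. Below is a minimal sketch of a manual cleanup pass driven from JDBC, assuming PostgreSQL 9.2 or later for the pg_stat_activity columns it uses; the URL, credentials, and two-hour idle threshold are placeholders, not values from this thread.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Sketch: terminate long-idle backends over JDBC.
// Needs the PostgreSQL 9.2+ pg_stat_activity column names and
// superuser / pg_signal_backend rights for pg_terminate_backend().
// The URL, user, password and 2-hour threshold are placeholders.
public class IdleBackendCleanup {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://dbhost/mydb";  // placeholder
        try (Connection con = DriverManager.getConnection(url, "admin", "secret");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT pid, pg_terminate_backend(pid) " +
                 "FROM pg_stat_activity " +
                 "WHERE state = 'idle' " +
                 "  AND state_change < now() - interval '2 hours' " +
                 "  AND pid <> pg_backend_pid()");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // Each row is one backend we asked the server to terminate.
                System.out.println("terminated backend " + rs.getInt(1)
                        + ": " + rs.getBoolean(2));
            }
        }
    }
}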

    In more detail:

    We have a Java application that uses JDBC to connect to a postgres
database. The app uses a connection pool to improve performance. When the
app starts up it creates some number of connections for this pool (e.g.,
10). During our development process, we are often debugging/killing the app
in mid-run. This means that it dies immediately without ever properly
closing the connections.
    The result of this is that the processes remain on the postgres server machine
until we restart postmaster. It appears that this is not a problem in our
production system because that kind of debugging/killing of the app does not occur.
However, we would like to find a setting for postgres so that it proactively
cleans up old connections.
    How can this be done?
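On the application side, one stopgap that needs no server setting is to close the pool's connections from a JVM shutdown hook, so that a normal kill or Ctrl-C still tells the backends to exit. The sketch below uses a hypothetical minimal pool (not the real pool class from this thread), and it does nothing for kill -9 or a JVM crash:

import java.sql.Connection;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of a tiny, hypothetical connection pool that closes its
// connections from a JVM shutdown hook. The hook runs on a normal
// kill or Ctrl-C, but NOT on kill -9 or a JVM crash.
public class SimplePool {
    private final List<Connection> connections = new CopyOnWriteArrayList<>();

    public SimplePool() {
        // Close everything we handed out when the JVM shuts down cleanly.
        Runtime.getRuntime().addShutdownHook(new Thread(this::closeAll));
    }

    public void register(Connection c) {
        connections.add(c);
    }

    public void closeAll() {
        for (Connection c : connections) {
            try {
                c.close();  // tells the backend process to exit
            } catch (Exception e) {
                // ignore; we are shutting down anyway
            }
        }
        connections.clear();
    }
}

Because the hook cannot help with kill -9, a server-side idle timeout (or a periodic cleanup pass like the one sketched earlier) remains the more robust fix.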


--Rainer

