Idle In Transaction

From: Anthony Presley
Subject: Idle In Transaction
Date:
Msg-id: 1279050805.2594.23.camel@speedy.resolution.com
Replies: Re: Idle In Transaction  (Tom Lane <tgl@sss.pgh.pa.us>)
         Re: Idle In Transaction  (hubert depesz lubaczewski <depesz@depesz.com>)
List: pgsql-general
Hi all,

I'm bordering on insanity, trying to track down an IDLE in transaction
problem.

This started a few weeks ago, and we are using a Java application,
running Spring 2.0, Hibernate 3.2 (with L2 cache), Postgres JDBC
8.3-604.  We're also using pgBouncer (though, I've tried pgPool II and
gotten the same).

Every so often (usually in the early morning), we are seeing an "<IDLE>
in transaction" show up.  This appears to lock / block other statements
from going through, though I'm not sure why.  If left unchecked, we end
up with all of our connections being overrun.

For the last several days, I've woken up rather early and run:

SELECT datname, usename, procpid, client_addr, waiting, query_start,
current_query FROM pg_stat_activity order by query_start asc;
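Since the blocker shows up as "<IDLE> in transaction", a variant of the same query can surface only transactions that have been idle for a while (a sketch; procpid, current_query, and xact_start are the 8.3-era pg_stat_activity column names — on 9.2 and later they became pid and state):

```sql
-- Backends sitting in a transaction open for longer than 10 minutes
SELECT procpid, usename, client_addr,
       now() - xact_start AS xact_age
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
  AND xact_start < now() - interval '10 minutes'
ORDER BY xact_start;
```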

This shows me something like:

  datname  |  usename   | procpid | client_addr  | waiting |          query_start          |                   current_query
-----------+------------+---------+--------------+---------+-------------------------------+----------------------------------------------------
 scheduler | ressystems |    6632 | 192.168.1.10 | t       | 2010-07-09 05:56:51.309911-05 | update employee set min_hours_per_day=$1, max_hours_per_day=$2, min_hours=$3, max_hours=$4, min_pay=$5, max_pay=$6, max_shifts=$7, min_shifts=$8, dateCreated=$9, user_id=$10, overtime=$11, notice_timeframe=$12, hire_date=$13, termination_date=$14, active_attendance_block=$15, internal_number=$16, internal_password=$17, max_shifts_per_week=$18, min_shifts_per_week=$19, birth_date=$20, ssn=$21, ssn_suffix=$22, payroll_id=$23, home_location_id=$24, home_department_id=$25, parent_id=$26, deleted=$27, startTimestamp=$28, endTimestamp=$29 where id=$30
 scheduler | ressystems |   11076 | 192.168.1.10 | f       | 2010-07-09 06:43:52.852328-05 | <IDLE> in transaction

In short, there are two backends.  The "update" has been running since
5:56am, and it is waiting on the other backend to finish.

When looking at the locks table, I run:

select bl.pid as blocked_pid, a.usename as blocked_user,
    kl.pid as blocking_pid, ka.usename as blocking_user, a.current_query as
blocked_statement
 from pg_catalog.pg_locks bl
      join pg_catalog.pg_stat_activity a
      on bl.pid = a.procpid
      join pg_catalog.pg_locks kl
           join pg_catalog.pg_stat_activity ka
           on kl.pid = ka.procpid
      on bl.transactionid = kl.transactionid and bl.pid != kl.pid
 where not bl.granted;

 blocked_pid | blocked_user | blocking_pid | blocking_user |               blocked_statement
-------------+--------------+--------------+---------------+------------------------------------------------
        6632 | ressystems   |        11076 | ressystems    | update employee set min_hours_per_day=$1, max_hours_per_day=$2, min_hours=$3, max_hours=$4, min_pay=$5, max_pay=$6, max_shifts=$7, min_shifts=$8, dateCreated=$9, user_id=$10, overtime=$11, notice_timeframe=$12, hire_date=$13, termination_date=$14, active_attendance_block=$15, internal_number=$16, internal_password=$17, max_shifts_per_week=$18, min_shifts_per_week=$19, birth_date=$20, ssn=$21, ssn_suffix=$22, payroll_id=$23, home_location_id=$24, home_department_id=$25, parent_id=$26, deleted=$27, startTimestamp=$28, endTimestamp=$29 where id=$30
(1 row)

This tells me that 6632 (the update) is blocked by 11076, an "<IDLE>
in transaction" backend, which I gather has been sitting there, idle,
for almost an hour?

Would a connection pooler cause (or amplify) any issues relating to
this?  How can I track down the issue here?  I've been looking through
web app and database logs without much luck.
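One way I could narrow the log hunt: if log_line_prefix includes the backend pid (e.g. "[%p]"), every statement the idle backend ever ran can be pulled out of the server log by pid.  A sketch, using a hypothetical log excerpt standing in for pg_log/postgresql-*.log (the pid and statements below are made up for illustration):

```shell
# Fake log excerpt; a real log_line_prefix of '%t [%p] %u@%d ' produces lines like these.
cat > /tmp/pg.log <<'EOF'
2010-07-09 05:40:01 CDT [11076] ressystems@scheduler LOG:  statement: BEGIN
2010-07-09 05:41:13 CDT [11076] ressystems@scheduler LOG:  statement: select * from employee
2010-07-09 05:56:51 CDT [6632] ressystems@scheduler LOG:  statement: update employee set ...
EOF

# Every line the idle backend (pid 11076) logged, in order -- the last
# statement before it went idle is usually the interesting one.
grep '\[11076\]' /tmp/pg.log
```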

Any way to have PostgreSQL log when a transaction sits idle for too
long, and perhaps what its last few statements were?  (I've manually
done this, though we are only logging statements taking longer than 5 ms,
and there doesn't appear to be any consistency.)
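For the hunt, a sketch of logging settings that would tie the idle backend back to its statements (assuming access to postgresql.conf; these parameter names exist on 8.3):

```
# postgresql.conf -- verbose, for diagnosis only
log_line_prefix = '%t [%p] %u@%d '   # timestamp, backend pid, user, database
log_statement = 'all'                # logs BEGIN too, so a transaction left
                                     # open with no COMMIT shows up under its pid
log_min_duration_statement = 0       # drop the 5 ms threshold while hunting
```

(For what it's worth, much later PostgreSQL releases, 9.6 and up, added an idle_in_transaction_session_timeout setting that kills such sessions automatically; it does not exist on 8.3.)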

Thanks!


--
Anthony

