Thread: Re: [GENERAL] Not your father's question about deadlocks

Re: [GENERAL] Not your father's question about deadlocks

From: Tom Lane
Date:
Clarence Gardner <clarence@silcom.com> writes:
> That scenario seems quite simple, but I can't reproduce the deadlock with
> this seemingly-identical sequence.

This is a bug in 8.1 and up.  The reason you couldn't reproduce it is
that it requires a minimum of three transactions involved, two of which
concurrently grab ShareLock on a tuple --- resulting in a MultiXact
being created to represent the concurrent lock holder.  The third xact
then comes along and tries to update the same tuple, so it naturally
blocks waiting for the existing ShareLocks to go away.  Then one of the
original xacts tries to grab share lock again.  It should fall through
because it "already has" the lock, but it fails to recognize this and
queues up behind the exclusive locker ... deadlock!

Reproducer:

Session 1:
create table foo (f1 int primary key, f2 text);
insert into foo values(1, 'z');
create table bar (f1 int references foo);
begin;
insert into bar values(1);

Session 2:
begin;
insert into bar values(1);

Session 3:
update foo set f2='q';

Back to session 1:
insert into bar values(1);
ERROR:  deadlock detected

Note that session 2 might actually have exited before the deadlock
occurs.

I think the problem is that HeapTupleSatisfiesUpdate() always returns
HeapTupleBeingUpdated when XMAX is a running MultiXact, even if the
MultiXact includes our own transaction.  This seems correct for the
usages in heap_update and heap_delete --- we have to wait for the
multixact's other members to terminate.  But in heap_lock_tuple
we need a special case when we are already a member of the MultiXact:
fall through without trying to reacquire the tuple lock.
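
For illustration, a minimal C-style sketch of that special case in heap_lock_tuple could look like the fragment below.  MultiXactIdIncludesCurrentXact() is a hypothetical helper standing in for "is our own transaction a member of this MultiXact?"; the other names mirror existing heapam.c code, but the exact shape of the real change may well differ.

/*
 * Sketch only: inside heap_lock_tuple(), after HeapTupleSatisfiesUpdate()
 * has reported HeapTupleBeingUpdated and before we queue up for the tuple
 * lock.  MultiXactIdIncludesCurrentXact() is a hypothetical membership test.
 */
TransactionId xwait = HeapTupleHeaderGetXmax(tuple->t_data);

if (tuple->t_data->t_infomask & HEAP_XMAX_IS_MULTI)
{
    if (MultiXactIdIncludesCurrentXact((MultiXactId) xwait))
    {
        /*
         * We already hold ShareLock on this tuple as a member of the
         * MultiXact, so fall through and report the lock as acquired.
         * Queueing up behind a would-be updater here is what produces
         * the spurious deadlock.
         */
        return HeapTupleMayBeUpdated;
    }
}
else if (TransactionIdIsCurrentTransactionId(xwait))
{
    /* single locker is our own transaction: already handled correctly */
    return HeapTupleMayBeUpdated;
}

/* otherwise, wait for the existing lock holder(s) as before */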

Comments?  Should we change HeapTupleSatisfiesUpdate's API to
distinguish this case, or is it better to have a localized change
in heap_lock_tuple?

            regards, tom lane

Re: [GENERAL] Not your father's question about deadlocks

From: "Gurjeet Singh"
Date:
On 11/17/06, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> we need a special case when we are already a member of the MultiXact:
> fall through without trying to reacquire the tuple lock.

Small implementation detail: also keep a count of how many times the same session has requested the same lock, and do not release the lock until it issues the same number of releases.

This might add (possibly significant) overhead, but my bigger concern is whether it is desirable at all.
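
To make the suggestion concrete, here is a generic C sketch of such per-session hold counting; all names are hypothetical and this is not PostgreSQL code, just the shape of the idea.

/*
 * Generic sketch of the hold-count idea (hypothetical names): remember how
 * many times a session acquired the same tuple lock and only drop it on the
 * matching final release.
 */
typedef struct TupleLockHold
{
    int hold_count;             /* acquisitions minus releases so far */
} TupleLockHold;

static void
share_lock_acquire(TupleLockHold *hold)
{
    if (hold->hold_count++ == 0)
    {
        /* first acquisition: actually take ShareLock on the tuple here */
    }
    /* repeat acquisitions just bump the counter */
}

static void
share_lock_release(TupleLockHold *hold)
{
    if (--hold->hold_count == 0)
    {
        /* matching final release: actually drop the ShareLock here */
    }
}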

> Comments?  Should we change HeapTupleSatisfiesUpdate's API to
> distinguish this case, or is it better to have a localized change
> in heap_lock_tuple?



--
gurjeet[.singh]@EnterpriseDB.com
singh.gurjeet@{ gmail | hotmail | yahoo }.com

Re: [GENERAL] Not your father's question about deadlocks

From: Tom Lane
Date:
"Gurjeet Singh" <singh.gurjeet@gmail.com> writes:
> On 11/17/06, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> we need a special case when we are already a member of the MultiXact:
>> fall through without trying to reacquire the tuple lock.

> Small implementation detail: also keep a count of how many times the same
> session has requested the same lock, and do not release the lock until it
> issues the same number of releases.

No need for that, because there isn't any heap_unlock_tuple.

            regards, tom lane

Re: [GENERAL] Not your father's question about deadlocks

From: "Gurjeet Singh"
Date:
On 11/17/06, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Gurjeet Singh" <singh.gurjeet@gmail.com> writes:
>> Small implementation detail: also keep a count of how many times the same
>> session has requested the same lock, and do not release the lock until it
>> issues the same number of releases.
>
> No need for that, because there isn't any heap_unlock_tuple.

Cool... I didn't know we could get away with that in PG land!!

I assume unlocking is done by a COMMIT/ROLLBACK.

--
gurjeet[.singh]@EnterpriseDB.com
singh.gurjeet@{ gmail | hotmail | yahoo }.com