Re: BUG #8470: 9.3 locking/subtransaction performance regression

From: Alvaro Herrera
Subject: Re: BUG #8470: 9.3 locking/subtransaction performance regression
Date:
Msg-id: 20131231163418.GW22570@eldon.alvh.no-ip.org
In reply to: Re: BUG #8470: 9.3 locking/subtransaction performance regression  (Andres Freund <andres@2ndquadrant.com>)
Responses: Re: BUG #8470: 9.3 locking/subtransaction performance regression  (Alvaro Herrera <alvherre@2ndquadrant.com>)
List: pgsql-bugs
Andres Freund wrote:
> On 2013-12-23 16:53:00 -0300, Alvaro Herrera wrote:

> > +              * This must be done before acquiring our tuple lock, to avoid
> > +              * deadlocks with other transactions that are already waiting on the
> > +              * lock we hold.
> > +              */
> > +             if (!(tp.t_data->t_infomask & HEAP_XMAX_IS_MULTI) &&
> > +                     TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tp.t_data)))
> > +             {
> > +                     Assert(HEAP_XMAX_IS_LOCKED_ONLY(tp.t_data->t_infomask));
> > +
> > +                     goto continue_deletion;
> > +             }
>
> Man, the control flow in heapam.c isn't getting simpler :/. Could you
> rename the label to perform_deletion or such? It's not really continuing
> it from my POV.

I considered the idea of putting the rest of that block inside a new
"else" block, which would have the same effect.  From a control-flow POV
that might be simpler, but I'm not sure.  Any opinions on that?  I have
renamed the labels for now.
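
To make the two shapes concrete, here is a rough sketch (simplified, not
the actual heap_delete() code; the elided parts stand for the existing
lock-wait logic):

	/* Variant A (as in the patch): jump past the wait logic */
	if (!(tp.t_data->t_infomask & HEAP_XMAX_IS_MULTI) &&
		TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tp.t_data)))
	{
		Assert(HEAP_XMAX_IS_LOCKED_ONLY(tp.t_data->t_infomask));
		goto perform_deletion;
	}

	/* ... acquire the tuple lock, wait out concurrent lockers ... */

perform_deletion:
	/* ... carry out the deletion proper ... */

	/* Variant B (the "else" idea): same effect, no label */
	if (!(tp.t_data->t_infomask & HEAP_XMAX_IS_MULTI) &&
		TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmax(tp.t_data)))
	{
		Assert(HEAP_XMAX_IS_LOCKED_ONLY(tp.t_data->t_infomask));
	}
	else
	{
		/* ... acquire the tuple lock, wait out concurrent lockers ... */
	}
	/* ... carry out the deletion proper ... */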

> >      /*
> > -     * We might already hold the desired lock (or stronger), possibly under a
> > -     * different subtransaction of the current top transaction.  If so, there
> > -     * is no need to change state or issue a WAL record.  We already handled
> > -     * the case where this is true for xmax being a MultiXactId, so now check
> > -     * for cases where it is a plain TransactionId.
> > -     *
> > -     * Note in particular that this covers the case where we already hold
> > -     * exclusive lock on the tuple and the caller only wants key share or
> > -     * share lock. It would certainly not do to give up the exclusive lock.
> > -     */
> > -    if (!(old_infomask & (HEAP_XMAX_INVALID |
> > -                          HEAP_XMAX_COMMITTED |
> > -                          HEAP_XMAX_IS_MULTI)) &&
> > -        (mode == LockTupleKeyShare ?
> > -         (HEAP_XMAX_IS_KEYSHR_LOCKED(old_infomask) ||
> > -          HEAP_XMAX_IS_SHR_LOCKED(old_infomask) ||
> > -          HEAP_XMAX_IS_EXCL_LOCKED(old_infomask)) :
> > -         mode == LockTupleShare ?
> > -         (HEAP_XMAX_IS_SHR_LOCKED(old_infomask) ||
> > -          HEAP_XMAX_IS_EXCL_LOCKED(old_infomask)) :
> > -         (HEAP_XMAX_IS_EXCL_LOCKED(old_infomask))) &&
> > -        TransactionIdIsCurrentTransactionId(xmax))
> > -    {
> > -        LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
> > -        /* Probably can't hold tuple lock here, but may as well check */
> > -        if (have_tuple_lock)
> > -            UnlockTupleTuplock(relation, tid, mode);
> > -        return HeapTupleMayBeUpdated;
> > -    }
>
> Are you sure transforming this hunk the way you did is functionally
> equivalent?

It's not, actually -- with the new code, we will go to all the trouble
of setting OldestMulti, marking the buffer dirty and doing some
WAL-logging, whereas the old code wouldn't do any of that.  But the new
code handles more cases and is easier to understand (at least IMV).

I considered the idea of checking whether the new Xmax we would be
setting on the tuple is the same as the Xmax the tuple already had, and
skipping those steps if so.  But it didn't seem enough of a win to me
to warrant the complexity of the check (I think we'd need to compare the
lock/update bits as well, not just the xmax value itself).
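
For the record, this is roughly what that short-circuit would look like
(a sketch only, not something in the patch; new_xmax and new_infomask
stand for the values we are about to stamp on the tuple, and as noted
the update bits in infomask2 would need the same treatment):

	if (new_xmax == HeapTupleHeaderGetRawXmax(tuple->t_data) &&
		(new_infomask & (HEAP_XMAX_IS_MULTI | HEAP_XMAX_LOCK_ONLY | HEAP_LOCK_MASK)) ==
		(old_infomask & (HEAP_XMAX_IS_MULTI | HEAP_XMAX_LOCK_ONLY | HEAP_LOCK_MASK)))
	{
		/* nothing would change on disk: skip MarkBufferDirty and the WAL record */
		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
		if (have_tuple_lock)
			UnlockTupleTuplock(relation, tid, mode);
		return HeapTupleMayBeUpdated;
	}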

> >          if (!MultiXactIdIsRunning(xmax))
> >          {
> > -            if (HEAP_XMAX_IS_LOCKED_ONLY(old_infomask) ||
> > -                TransactionIdDidAbort(MultiXactIdGetUpdateXid(xmax,
> > -                                                              old_infomask)))
> > +            if ((HEAP_XMAX_IS_LOCKED_ONLY(old_infomask) ||
> > +                 !TransactionIdDidCommit(MultiXactIdGetUpdateXid(xmax,
> > +                                                                 old_infomask))))
>
> Heh, that's actually an (independent) bug...

I'm not sure it's a bug fix ... this just additionally applies the
optimization (avoiding a multi) to the case where a transaction has
crashed rather than aborted; I don't think there is any functional
difference.  Without it we would end up spuriously calling
MultiXactIdExpand, but that routine already ignores crashed updaters,
so it would end up creating a singleton multi anyway.
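
To spell the distinction out (illustration only, not code from the
patch): for an updater that crashed, neither a commit nor an abort
record was ever written, so the two tests diverge:

	TransactionId	updater = MultiXactIdGetUpdateXid(xmax, old_infomask);

	/*
	 * TransactionIdDidAbort(updater)    -> false  (old test: fall through
	 *                                              to the else branch)
	 * !TransactionIdDidCommit(updater)  -> true   (new test: treat it like
	 *                                              an aborted updater and
	 *                                              avoid creating a multi)
	 *
	 * Falling through means calling MultiXactIdExpand(), which ignores the
	 * crashed updater anyway and builds a singleton multi -- hence no
	 * functional difference, just a wasted multixact.
	 */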

This doesn't seem worth a separate commit, if that's what you're
thinking.  Do you disagree?

> What's the test coverage for these cases? It better be 110%...

110% is rather hard to achieve, particularly because of the cases
involving crashed transactions.  Other than that I think I tested all
those paths.  I will go through it again.


Thanks for the review; here's an updated version.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

