Thread: Minor fix in lwlock.c

Minor fix in lwlock.c

From:
"Qingqing Zhou"
Date:
The chance that num_held_lwlocks goes beyond MAX_SIMUL_LWLOCKS is similar to
the chance of failing to acquire a spinlock within one minute, so the two
cases should be treated the same way. This is mainly to catch programming
errors (e.g., forgetting to release the LWLocks).

Regards,
Qingqing

---

Index: lwlock.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/storage/lmgr/lwlock.c,v
retrieving revision 1.25
diff -c -r1.25 lwlock.c
*** lwlock.c    31 Dec 2004 22:01:05 -0000      1.25
--- lwlock.c    8 Apr 2005 02:19:31 -0000
***************
*** 328,334 ****
        SpinLockRelease_NoHoldoff(&lock->mutex);

        /* Add lock to list of locks held by this backend */
!       Assert(num_held_lwlocks < MAX_SIMUL_LWLOCKS);
        held_lwlocks[num_held_lwlocks++] = lockid;

        /*
--- 328,335 ----
        SpinLockRelease_NoHoldoff(&lock->mutex);

        /* Add lock to list of locks held by this backend */
!       if (num_held_lwlocks >= MAX_SIMUL_LWLOCKS)
!               elog(FATAL, "Too many LWLocks");
        held_lwlocks[num_held_lwlocks++] = lockid;

        /*
***************
*** 397,403 ****
        else
        {
                /* Add lock to list of locks held by this backend */
!               Assert(num_held_lwlocks < MAX_SIMUL_LWLOCKS);
                held_lwlocks[num_held_lwlocks++] = lockid;
        }

--- 398,405 ----
        else
        {
                /* Add lock to list of locks held by this backend */
!               if (num_held_lwlocks >= MAX_SIMUL_LWLOCKS)
!                       elog(FATAL, "Too many LWLocks");
                held_lwlocks[num_held_lwlocks++] = lockid;
        }



Re: Minor fix in lwlock.c

From:
Tom Lane
Date:
"Qingqing Zhou" <zhouqq@cs.toronto.edu> writes:
> The chance that num_held_lwlocks goes beyond MAX_SIMUL_LWLOCKS is similar to
> the chance of failing to acquire a spinlock within one minute, so the two
> cases should be treated the same way. This is mainly to catch programming
> errors (e.g., forgetting to release the LWLocks).

Hmm ... yeah, it's not too hard to imagine a bug leading to trying to
grab content locks on more than 100 buffers, for example.  Patch
applied, although I reduced the severity from FATAL to ERROR.  I don't
see any reason to think we'd be unable to recover normally from such a
bug --- do you?

            regards, tom lane

Re: Minor fix in lwlock.c

From:
"Qingqing Zhou"
Date:
"Tom Lane" <tgl@sss.pgh.pa.us> writes
> I don't see any reason to think we'd be unable to recover normally from
> such a bug --- do you?
>

I guess the problem is here:

 /*
  * Fix the process wait semaphore's count for any absorbed wakeups.
  */
 while (extraWaits-- > 0)
  PGSemaphoreUnlock(&proc->sem);

elog(ERROR) won't restore the semaphore count.




Re: Minor fix in lwlock.c

From:
Tom Lane
Date:
"Qingqing Zhou" <zhouqq@cs.toronto.edu> writes:
> I guess the problem is here:

>  /*
>   * Fix the process wait semaphore's count for any absorbed wakeups.
>   */
>  while (extraWaits-- > 0)
>   PGSemaphoreUnlock(&proc->sem);

Mmm.  Could be a problem, but the chances of having extraWaits>0 is
really pretty small.  In any case, FATAL doesn't fix this, because
it will still try to go through normal backend exit cleanup which
requires having working LWLock support.  If you take the above risk
seriously then you need a PANIC error.

The alternative would be to move the Unlock loop in front of the
addition of the LWLock to held_lwlocks[], but I think that cure
is probably worse than the disease --- the chance of an error during
Unlock seems nonzero.

            regards, tom lane

Re: Minor fix in lwlock.c

From:
"Qingqing Zhou"
Date:
"Tom Lane" <tgl@sss.pgh.pa.us> writes>
> The alternative would be to move the Unlock loop in front of the
> addition of the LWLock to held_lwlocks[], but I think that cure
> is probably worse than the disease --- the chance of an error during
> Unlock seems nonzero.
>

Another alternative would be to use PG_TRY/PG_CATCH to make sure that the
semaphore is released, but that seems to cost too much ...

Regards,
Qingqing



Re: Minor fix in lwlock.c

From:
Tom Lane
Date:
"Qingqing Zhou" <zhouqq@cs.toronto.edu> writes:
> Another alternative would be to use PG_TRY/PG_CATCH to make sure that the
> semaphore is released, but that seems to cost too much ...

I agree.  LWLockAcquire is a hot-spot already.

Maybe we *should* make it a PANIC.  Thoughts?

            regards, tom lane

Re: Minor fix in lwlock.c

From:
"Qingqing Zhou"
Date:
"Tom Lane" <tgl@sss.pgh.pa.us> writes
>
> Maybe we *should* make it a PANIC.  Thoughts?
>

Reasonable, since this should *never* happen. If it ever does, that means we
have a serious bug in our design/coding.

Regards,
Qingqing



Re: Minor fix in lwlock.c

From:
Tom Lane
Date:
"Qingqing Zhou" <zhouqq@cs.toronto.edu> writes:
> "Tom Lane" <tgl@sss.pgh.pa.us> writes
>> Maybe we *should* make it a PANIC.  Thoughts?

> Reasonable, since this should *never* happen. If it ever does, that means we
> have a serious bug in our design/coding.

Plan C would be something like

    if (num_held_lwlocks >= MAX_SIMUL_LWLOCKS)
    {
        release the acquired lock;
        elog(ERROR, "too many LWLocks taken");
    }

But we couldn't just call LWLockRelease, since it expects the lock to
be recorded in held_lwlocks[].  We'd have to duplicate a lot of code,
or split LWLockRelease into multiple routines, neither of which seem
attractive answers considering that this must be a can't-happen
case anyway.

PANIC it will be, unless someone thinks of a reason why not by
tomorrow...

            regards, tom lane

Re: Minor fix in lwlock.c

From:
"Qingqing Zhou"
Date:
"Tom Lane" <tgl@sss.pgh.pa.us> writes
> Plan C would be something like
>
> if (num_held_lwlocks >= MAX_SIMUL_LWLOCKS)
> {
> release the acquired lock;
> elog(ERROR, "too many LWLocks taken");
> }
>
> But we couldn't just call LWLockRelease, since it expects the lock to
> be recorded in held_lwlocks[].  We'd have to duplicate a lot of code,
> or split LWLockRelease into multiple routines, neither of which seem
> attractive answers considering that this must be a can't-happen
> case anyway.

We can reserve some LWLocks for elog(FATAL), since shmem_exit() would need
them (elog(ERROR) does not seem to). So even if ERROR is upgraded to FATAL
in some cases (e.g., PGSemaphoreUnlock() fails), we could still exit
gracefully. The code would look like this:

---
/* Unlock semaphores first */
while (extraWaits-- > 0)
    PGSemaphoreUnlock(&proc->sem);

/*
 * Then add the lock to my list.  If a process is exiting, it may use
 * the reserved lwlocks.
 */
reserved = proc_exit_inprogress? 0 : NUM_RESERVED_LWLOCKS;
if (num_held_lwlocks >= MAX_SIMUL_LWLOCKS - reserved)
    elog(ERROR, "too many LWLocks taken");
held_lwlocks[num_held_lwlocks++] = lockid;
---

Since this is a should-not-happen case, the fix could be deferred until
PG needs to hold more LWLocks than it does now.

Regards,
Qingqing



Re: Minor fix in lwlock.c

From:
Tom Lane
Date:
"Qingqing Zhou" <zhouqq@cs.toronto.edu> writes:
> /* Unlock semaphores first */
> while (extraWaits-- > 0)
>     PGSemaphoreUnlock(&proc->sem);

> /* Add the lock into my list then.
>  * If a process is in exiting status, it could use the reserved lwlocks
>  */
> reserved = proc_exit_inprogress? 0 : NUM_RESERVED_LWLOCKS;
> if (num_held_lwlocks >= MAX_SIMUL_LWLOCKS - reserved)
>     elog(ERROR, "too many LWLocks taken");
> held_lwlocks[num_held_lwlocks++] = lockid;

But if the MAX_SIMUL_LWLOCKS - NUM_RESERVED_LWLOCKS limit is reached,
you elog without having recorded the lock you just took ... which is a
certain loser since nothing will ever release it.  Also,
proc_exit_inprogress is not the appropriate thing to test for unless
you're going to use an elog(FATAL).

I think it would work to record the lock, unwind the extraWaits, and
*then* elog if we're above the allowable limit.  Something like

 if (num_held_lwlocks >= MAX_SIMUL_LWLOCKS)
     elog(PANIC, "too many LWLocks taken");
 held_lwlocks[num_held_lwlocks++] = lockid;

 while (extraWaits-- > 0)
     PGSemaphoreUnlock(&proc->sem);

 if (!InError && num_held_lwlocks >= MAX_SIMUL_LWLOCKS - NUM_RESERVED_LWLOCKS)
     elog(ERROR, "too many LWLocks taken");

except we don't have the InError flag anymore so there'd need to be some
other test for deciding whether it should be OK to go into the reserved
locks.

But I think this is too much complexity for a case that shouldn't ever
happen.

            regards, tom lane

Re: Minor fix in lwlock.c

From:
Tom Lane
Date:
Actually, on further thought, there's a really simple solution that
we've used elsewhere: make sure you have the resource you need *before*
you get into the critical section of code.  I've applied the attached
revised patch.

            regards, tom lane

*** src/backend/storage/lmgr/lwlock.c.orig    Fri Dec 31 17:46:10 2004
--- src/backend/storage/lmgr/lwlock.c    Fri Apr  8 10:14:04 2005
***************
*** 213,218 ****
--- 213,222 ----
       */
      Assert(!(proc == NULL && IsUnderPostmaster));

+     /* Ensure we will have room to remember the lock */
+     if (num_held_lwlocks >= MAX_SIMUL_LWLOCKS)
+         elog(ERROR, "too many LWLocks taken");
+
      /*
       * Lock out cancel/die interrupts until we exit the code section
       * protected by the LWLock.  This ensures that interrupts will not
***************
*** 328,334 ****
      SpinLockRelease_NoHoldoff(&lock->mutex);

      /* Add lock to list of locks held by this backend */
-     Assert(num_held_lwlocks < MAX_SIMUL_LWLOCKS);
      held_lwlocks[num_held_lwlocks++] = lockid;

      /*
--- 332,337 ----
***************
*** 353,358 ****
--- 356,365 ----

      PRINT_LWDEBUG("LWLockConditionalAcquire", lockid, lock);

+     /* Ensure we will have room to remember the lock */
+     if (num_held_lwlocks >= MAX_SIMUL_LWLOCKS)
+         elog(ERROR, "too many LWLocks taken");
+
      /*
       * Lock out cancel/die interrupts until we exit the code section
       * protected by the LWLock.  This ensures that interrupts will not
***************
*** 397,403 ****
      else
      {
          /* Add lock to list of locks held by this backend */
-         Assert(num_held_lwlocks < MAX_SIMUL_LWLOCKS);
          held_lwlocks[num_held_lwlocks++] = lockid;
      }

--- 404,409 ----

Re: Minor fix in lwlock.c

From:
"Qingqing Zhou"
Date:
"Tom Lane" <tgl@sss.pgh.pa.us> writes
> Actually, on further thought, there's a really simple solution that
> we've used elsewhere: make sure you have the resource you need *before*
> you get into the critical section of code.  I've applied the attached
> revised patch.
>

Oh, that's the one --- why didn't I think of it the first time? :-)

Regards,
Qingqing