Re: Configurable FP_LOCK_SLOTS_PER_BACKEND

From: Andres Freund
Subject: Re: Configurable FP_LOCK_SLOTS_PER_BACKEND
Date:
Message-ID: 20230807220514.m7imkysspjnowjyr@awork3.anarazel.de
In response to: Re: Configurable FP_LOCK_SLOTS_PER_BACKEND  (Andres Freund <andres@anarazel.de>)
Responses: Re: Configurable FP_LOCK_SLOTS_PER_BACKEND  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
Hi,

On 2023-08-07 14:36:48 -0700, Andres Freund wrote:
> What if fast path locks entered PROCLOCK into the shared hashtable, just like
> with normal locks, the first time a lock is acquired by a backend. Except that
> we'd set a flag indicating the lock is a fastpath lock. When the lock is
> released, neither the LOCALLOCK nor the PROCLOCK entry would be
> removed. Instead, the LOCK/PROCLOCK would be modified to indicate that the
> lock is not held anymore.
> 
> That itself wouldn't buy us much - we'd still need to do a lookup in the
> shared hashtable. But, by the time we decide whether to use fast path locks,
> we've already done a hash lookup in the LOCALLOCK hashtable. Because the
> PROCLOCK entry would continue to exist, we can use LOCALLOCK->proclock to get
> the PROCLOCK entry without a shared hash table lookup.
> 
> Acquiring a strong lock on a fastpath lock would basically entail modifying
> all the relevant PROCLOCKs atomically to indicate that fast path locks aren't
> possible anymore.  Acquiring a fast path lock would just require atomically
> modifying the PROCLOCK to indicate that the lock is held.
> 
> At first blush, this sounds like it could end up being fairly clean and
> generic?
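
To make that concrete, a sketch of the release side (PROCLOCK->flags and
PROCLOCK_FASTPATH are hypothetical names, and the holdMask update would of
course have to be made atomic in reality):

    /* hypothetical flag: a fast path entry that is kept around on release */
    #define PROCLOCK_FASTPATH   0x01

    static void
    FastPathRelease(LOCALLOCK *locallock, LOCKMODE lockmode)
    {
        PROCLOCK   *proclock = locallock->proclock; /* cached, still valid */

        Assert(proclock->flags & PROCLOCK_FASTPATH);

        /* neither LOCALLOCK nor PROCLOCK is removed, just marked unheld */
        proclock->holdMask &= ~LOCKBIT_ON(lockmode);
        locallock->nLocks = 0;
    }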

On 2023-08-07 13:05:32 -0400, Robert Haas wrote:
> Of course, another thing we could do is try to improve the main lock
> manager somehow. I confess that I don't have a great idea for that at
> the moment, but the current locking scheme there is from a very, very
> long time ago and clearly wasn't designed with modern hardware in
> mind.

I think the biggest flaw of the locking scheme is that the LockHash locks
protect two, somewhat independent, things (sketched below):
1) the set of currently lockable objects, i.e. the entries in the hash table [partition]
2) the state of all the locks [in a partition]
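
Both currently live under the same lwlock - a condensed sketch of the
relevant part of LockAcquire() (details elided, the structure is the point):

    hashcode = LockTagHashCode(locktag);
    partitionLock = LockHashPartitionLock(hashcode);

    LWLockAcquire(partitionLock, LW_EXCLUSIVE);

    /* 1) find-or-create the entry - the set of lockable objects */
    lock = (LOCK *) hash_search_with_hash_value(LockMethodLockHash,
                                                locktag, hashcode,
                                                HASH_ENTER_NULL, &found);

    /* 2) ... and, under the same lwlock, modify that lock's state */
    lock->nRequested++;
    lock->requested[lockmode]++;
    GrantLock(lock, proclock, lockmode);

    LWLockRelease(partitionLock);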

It'd not be that hard to avoid the shared hashtable lookup in a number of
cases, e.g. by keeping LOCALLOCK entries around for longer, as I suggest
above.  But we can't, in general, avoid the lock on the partition anyway, as
each lock's state is also protected by the partition lock.
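
I.e. something like this on re-acquisition (a sketch; assumes LOCALLOCK
entries with nLocks == 0 survive release, with locallock->proclock kept
valid):

    locallock = (LOCALLOCK *) hash_search(LockMethodLocalHash,
                                          &localtag, HASH_ENTER, &found);

    if (found && locallock->nLocks == 0 && locallock->proclock != NULL)
    {
        /* the shared hashtable lookup can be skipped entirely ... */
        lock = locallock->lock;
        proclock = locallock->proclock;

        /* ... but the partition lock is still needed for the lock state */
        LWLockAcquire(LockHashPartitionLock(locallock->hashcode),
                      LW_EXCLUSIVE);
    }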

The amount of work to do a lookup in the shared hashtable, and/or to create a
new entry therein, is quite bounded.  But the work for acquiring a lock is
much less so - we'll often have to iterate over the set of lock holders, etc.
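
A conflict check, for example, has to visit every PROCLOCK attached to the
lock - roughly this shape (condensed; the real loops live in lock.c and
deadlock.c):

    dlist_iter iter;

    dlist_foreach(iter, &lock->procLocks)
    {
        PROCLOCK   *other = dlist_container(PROCLOCK, lockLink, iter.cur);

        /* O(number of holders), all while holding the partition lock */
        if (other != proclock && (other->holdMask & conflictMask) != 0)
        {
            conflict = true;
            break;
        }
    }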

I think we ought to investigate whether pushing down the locking for the "lock
state" into the individual locks is worth it. That way the partitioned lock
would just protect the hashtable.
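
I.e. roughly this split, where stateLock would be a new, per-LOCK lwlock
(hypothetical; the hard part - keeping the entry alive once the partition
lock is dropped - is just a comment here):

    /* bounded part: entry lookup, under the partition lock */
    LWLockAcquire(partitionLock, LW_SHARED);
    lock = (LOCK *) hash_search_with_hash_value(LockMethodLockHash,
                                                locktag, hashcode,
                                                HASH_FIND, NULL);
    /* would need to pin the entry here, so it can't be removed */
    LWLockRelease(partitionLock);

    /* unbounded part: the lock's state, under its own lock */
    LWLockAcquire(&lock->stateLock, LW_EXCLUSIVE);
    lock->nRequested++;
    lock->requested[lockmode]++;
    LWLockRelease(&lock->stateLock);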

The biggest issue I see is deadlock checking. Right now acquiring all lock
partition locks gives you a consistent view of all the non-fastpath locks -
and fastpath locks can't participate in deadlocks. Any scheme that makes
"lock state" locking in general more granular will make it next to
impossible to have a similarly consistent view of all locks.  I'm not sure
the current degree of consistency is required, however - the lockers
participating in a lock cycle are, pretty much by definition, blocked.
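
For reference, the consistent view is currently cheap to get - CheckDeadLock()
just acquires every partition lock, in order:

    for (i = 0; i < NUM_LOCK_PARTITIONS; i++)
        LWLockAcquire(LockHashPartitionLockByIndex(i), LW_EXCLUSIVE);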


A secondary issue is that making the locks more granular could affect the
happy path measurably - we'd need two atomic lock acquisitions for each
heavyweight lock acquisition (one for the hashtable partition, one for the
lock's own state), not one.  But if we cached the lookup in the shared
hashtable, we'd commonly be able to skip the hashtable lookup, and with it
the partition lock...
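
With the cached lookup, the happy path would be down to one again - a sketch
combining the hypothetical stateLock and retained-LOCALLOCK ideas from above:

    if (locallock->proclock != NULL)    /* retained from an earlier acquire */
    {
        lock = locallock->lock;

        LWLockAcquire(&lock->stateLock, LW_EXCLUSIVE);  /* the only lock taken */
        lock->nRequested++;
        lock->requested[lockmode]++;
        GrantLock(lock, locallock->proclock, lockmode);
        LWLockRelease(&lock->stateLock);
    }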

Greetings,

Andres Freund


