Re: Spinlocks, yet again: analysis and proposed patches

From: Min Xu (Hsu)
Subject: Re: Spinlocks, yet again: analysis and proposed patches
Date:
Msg-id: 20050913233155.GJ5161@cs.wisc.edu
In reply to: Re: Spinlocks, yet again: analysis and proposed patches  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Spinlocks, yet again: analysis and proposed patches  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Tue, 13 Sep 2005, Tom Lane wrote:
> I wrote:
> > We could ameliorate this if there were a way to acquire ownership of the
> > cache line without necessarily winning the spinlock.
> 
> Another thought came to mind: maybe the current data layout for LWLocks
> is bad.  Right now, the spinlock that protects each LWLock data struct
> is itself part of the struct, and since the structs aren't large (circa
> 20 bytes), the whole thing is usually all in the same cache line.  This
> had seemed like a good idea at the time, on the theory that once you'd
> obtained the spinlock you'd also have pulled in the LWLock contents.
> But that's only optimal under an assumption of low contention.  If
> there's high contention for the spinlock, then another processor
> spinning on the lock will be continuously taking away ownership of the
> cache line and thus slowing down the guy who's got the lock and is
> trying to examine/update the LWLock contents.

If this were the case, first fetching the spin lock with read-only
permission should help. Modern processors have store buffers precisely
so that a store miss doesn't stall the processor. Therefore, if
processor A holds the spin lock and is examining and updating the
LWLock on the same cache line, then as long as processor B doesn't try
to write that cache line, processor A won't be slowed down. What would
happen is that the multiple writes coalesce in the store buffer and get
flushed to memory once processor A regains write permission on the
line.
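The read-only fetch described above is the classic test-and-test-and-set
idiom: spin on plain loads (the line stays in shared state, so spinners
generate no coherence traffic while the lock is held) and attempt the
atomic write only when the lock looks free. A minimal sketch in C11
atomics, purely for illustration (PostgreSQL's actual TAS is
per-platform assembler in s_lock.h):

```c
#include <stdatomic.h>

typedef struct
{
    atomic_int locked;          /* 0 = free, 1 = held */
} spinlock_t;

static void
spin_lock(spinlock_t *s)
{
    for (;;)
    {
        /* Test: spin with plain loads; the cache line can stay in
         * shared state, so we don't steal it from the lock holder. */
        while (atomic_load_explicit(&s->locked, memory_order_relaxed))
            ;                   /* a pause/cpu_relax hint would go here */

        /* Test-and-set: only now do the read-modify-write, which
         * requests the line in exclusive state. */
        if (!atomic_exchange_explicit(&s->locked, 1,
                                      memory_order_acquire))
            return;
    }
}

static void
spin_unlock(spinlock_t *s)
{
    atomic_store_explicit(&s->locked, 0, memory_order_release);
}
```

Under heavy contention the spinners still stampede for the line the
moment the lock is released, but while it is held the owner keeps
exclusive ownership and its coalesced stores drain without interference.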

I still think your first scenario is possible: just as processor B
finally gets the lock, its time slice runs out. That seems likely to
cause a chain of context switches.

> Maybe it'd be better to allocate the spinlocks off by themselves.
> Then, spinning processors would not affect the processor that's updating
> the LWLock; only when it finishes doing that and needs to clear the
> spinlock will it have to contend with the spinners for the cache line
> containing the spinlock.
> 
> This would add an instruction or so to LWLockAcquire and LWLockRelease,
> and would be of no benefit on uniprocessors, but it might be worth doing
> for multiprocessors.  Another patch to test ...
> 
> I'm starting to think that we might have to succumb to having a compile
> option "optimize for multiprocessor" or "optimize for single processor".
> It's pretty hard to see how we'd alter a data structure decision like
> this on the fly.
> 
>             regards, tom lane
> 
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Have you searched our list archives?
> 
>                http://archives.postgresql.org
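Tom's idea of allocating the spinlocks away from the LWLock contents can
be sketched by padding the spinlock out to its own cache line, so that
spinners contend only for that line and not for the data the holder is
updating. The field names and the 64-byte line size below are
assumptions for illustration, not PostgreSQL's actual layout:

```c
#include <stddef.h>

#define CACHE_LINE 64           /* assumed line size; varies by CPU */

/* Current layout (simplified): the spinlock shares a cache line with
 * the fields it protects, so spinners keep stealing the line from the
 * processor that holds the lock. */
typedef struct
{
    volatile int mutex;         /* the spinlock itself */
    char         exclusive;     /* # of exclusive holders (0 or 1) */
    short        shared;        /* # of shared holders */
    /* ... wait-queue pointers ... */
} LWLockPacked;

/* Alternative sketch: pad the spinlock to a full line. Spinning now
 * bounces only this line; the protected fields live on the next one. */
typedef struct
{
    volatile int mutex;
    char         pad[CACHE_LINE - sizeof(int)];
    char         exclusive;
    short        shared;
    /* ... wait-queue pointers ... */
} LWLockPadded;
```

The trade-off Tom notes still applies: releasing the spinlock is a
write to the contended line, so the holder pays for one line transfer
at release time, but not while examining and updating the LWLock.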

