Thread: Re: [PATCHES] update i386 spinlock for hyperthreading


Re: [PATCHES] update i386 spinlock for hyperthreading

From:
"Simon Riggs"
Date:
>Kenneth Marshall would like me to post this:
> I agree that in order to manage today's large memory machines, we
> need to have less contention in our buffer management strategies.
> The two main choke points are in the buffer hash table routines
> and in the buffer management linked lists. Unfortunately most of the
> code depends on holding the bufmgr lock on entry which eliminates
> many chances for parallelism.
> 
> The number of buffer pools should at the very minimum be equal to
> the number of processors in the system. This can allow us to greatly
> reduce the number of cache-sync cycles if each processor has its
> own lock structures for T1-cpuN and T2-cpuN. When we allocate a new
> buffer, we preferentially grab a buffer from the CPU-specific queue
> before looking in the other queues. This alone reduces the contention
> on any one lock to roughly (1/numberCPUs) of its former level.
> 
> The next item to address is the buf_table concurrency. It appears that
> the same code that was used in the hash index update by Tom Lane could
> be used to split the buf_table accesses into a per-bucket access using
> a per-bucket lock and not a global lock. Modifying the current
> dyn_hash search and update code would make it look effectively like
> Mr. Lane's new hash index code.
> 
> The final issue is the churn in the MRU/LRU positions on the buffer
> management lists. Currently, we always remove a buffer from the list
> (T1, T2,...) and then add it to the new list in the MRU position. On
> a busy system, for a given query mix a subset of the buffers will be
> very busy and compete for the MRU position. What we want to do is
> avoid moving a buffer near the top of the list for some definition
> of top. One idea is to have a "per-CPU per-T* counter" which is
> incremented as buffers are added to the MRU position. The key is to
> store the counter value in the header. Now when we access the buffer
> in the list, if the counter is within a value (settable by a GUC)
> the buffer is not moved. This would reduce the MRU churn for the
> busy buffers near the top of the lists.
> 
> These ideas are very similar to your own speculations. I hope that
> their slightly different slant can contribute to this discussion.
> Thank you for your time.
> 
> Yours truly,
> Kenneth Marshall
> 
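
A minimal sketch, in C, of the counter check described in the MRU-churn
paragraph above. Every name here (BufDesc, BufList, mru_stamp,
mru_move_threshold) is invented for illustration and does not correspond
to anything in the actual bufmgr sources; the list unlink/relink calls
are elided.

#include <stdint.h>
#include <stdbool.h>

typedef struct BufDesc
{
    uint32_t    mru_stamp;      /* list counter value when this buffer was
                                 * last placed at the MRU position */
    /* ... rest of the buffer header ... */
} BufDesc;

typedef struct BufList
{
    uint32_t    mru_counter;    /* incremented on every insertion at MRU */
    /* ... list head/tail pointers ... */
} BufList;

/* GUC-style knob: how many MRU insertions may pass before a buffer is
 * considered far enough from the top to be worth moving again. */
static uint32_t mru_move_threshold = 16;

static void
touch_buffer(BufList *list, BufDesc *buf)
{
    /* Unsigned subtraction keeps this correct across counter wraparound. */
    if (list->mru_counter - buf->mru_stamp <= mru_move_threshold)
        return;                 /* still near the top of the list: skip the
                                 * unlink/relink and avoid the churn */

    /* unlink buf from its current position and relink it at MRU (elided) */
    buf->mru_stamp = ++list->mru_counter;
}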



Re: [PATCHES] update i386 spinlock for hyperthreading

From:
Neil Conway
Date:
"Simon Riggs" <simon@2ndquadrant.com> writes:
>>Kenneth Marshall would like me to post this:
>> I agree that in order to manage today's large memory machines, we
>> need to have less contention in our buffer management strategies.
>> The two main choke points are in the buffer hash table routines
>> and in the buffer management linked lists. Unfortunately most of the
>> code depends on holding the bufmgr lock on entry which eliminates
>> many chances for parallelism.

Are you familiar with the work I've been doing recently to try to
reduce the contention for the BufMgrLock? For example:

http://www.mail-archive.com/pgsql-hackers%40postgresql.org/msg40289.html

The approach I've taken is to remove the usage of the BufMgrLock for
operations that do not affect the global state of the buffer pool.
That means that operations like incrementing a buffer's refcount
require only holding the per-buffer meta data lock. That's only one
part of the puzzle, however: other ways to reduce BufMgrLock
contention will probably be necessary.
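
In rough terms, and with invented names (meta_lock, PinBufferLocal)
rather than the identifiers used in the actual patch, the division of
labor looks like this:

#include <pthread.h>

typedef struct BufferDescSketch
{
    pthread_mutex_t meta_lock;  /* per-buffer meta data lock; initialized
                                 * with pthread_mutex_init() when the
                                 * buffer pool is created */
    int             refcount;   /* shared pin count for this buffer */
    /* ... tag, flags, and so on ... */
} BufferDescSketch;

/* Pinning only changes this buffer's own state, so it takes just the
 * per-buffer lock and never the global BufMgrLock. */
static void
PinBufferLocal(BufferDescSketch *buf)
{
    pthread_mutex_lock(&buf->meta_lock);
    buf->refcount++;
    pthread_mutex_unlock(&buf->meta_lock);
}

static void
UnpinBufferLocal(BufferDescSketch *buf)
{
    pthread_mutex_lock(&buf->meta_lock);
    buf->refcount--;
    pthread_mutex_unlock(&buf->meta_lock);
}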

Unfortunately this code is not in CVS yet: I've been too busy with
school to wrap up the remaining issues it has. However, I hope to get
it into the tree reasonably soon, and certainly in time for 7.5.

>> The number of buffer pools should at the very minimum be equal to
>> the number of processors in the system. [...]

Not sure I understand exactly what you're suggesting here. Can you
elaborate?

>> The next item to address is the buf_table concurrency. It appears
>> that the same code that was used in the hash index update by Tom
>> Lane could be used to split the buf_table accesses into a
>> per-bucket access using a per-bucket lock and not a global
>> lock. Modifying the current dyn_hash search and update code would
>> make it look effectively like Mr. Lane's new hash index code.

Interesting. This would be complementary, of course, to my work on
changing the buffer locking scheme: perhaps once that is done, we can
reassess the remaining lock contention issues in the bufmgr, and
implement this if necessary?
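
For what it's worth, here is a sketch of that per-bucket (partitioned)
locking idea. The names (NUM_BUF_PARTITIONS, BufTableEntry, buftag_hash)
are invented, and a pthread mutex stands in for whatever lightweight
lock the real implementation would use.

#include <pthread.h>
#include <string.h>

#define NUM_BUF_PARTITIONS 16           /* power of two */

typedef struct BufferTagSketch
{
    unsigned    rnode;                  /* stand-in for the real relation id */
    unsigned    blockNum;
} BufferTagSketch;

typedef struct BufTableEntry
{
    BufferTagSketch         tag;
    int                     buf_id;
    struct BufTableEntry   *next;
} BufTableEntry;

static BufTableEntry   *buckets[NUM_BUF_PARTITIONS];
static pthread_mutex_t  bucket_locks[NUM_BUF_PARTITIONS];

static void
buf_table_init(void)
{
    for (int i = 0; i < NUM_BUF_PARTITIONS; i++)
        pthread_mutex_init(&bucket_locks[i], NULL);
}

static unsigned
buftag_hash(const BufferTagSketch *tag)
{
    return (tag->rnode * 2654435761u) ^ tag->blockNum;
}

/* Look up a tag while holding only its bucket's lock, so searches that
 * hash to different buckets run in parallel instead of serializing on
 * one table-wide lock. */
static int
buf_table_lookup(const BufferTagSketch *tag)
{
    unsigned    h = buftag_hash(tag) & (NUM_BUF_PARTITIONS - 1);
    int         result = -1;

    pthread_mutex_lock(&bucket_locks[h]);
    for (BufTableEntry *e = buckets[h]; e != NULL; e = e->next)
    {
        if (memcmp(&e->tag, tag, sizeof(*tag)) == 0)
        {
            result = e->buf_id;
            break;
        }
    }
    pthread_mutex_unlock(&bucket_locks[h]);
    return result;
}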

Another related idea that Jan Wieck and I had discussed was avoiding
acquiring the BufMgrLock at all in BufferAlloc() where possible. For
instance, we could enhance the existing PrivateRefCount mechanism, or
invent some new mechanism, which would essentially keep a LRU list of
buffer tag -> buffer id mappings in each backend's local memory. Then,
we would walk this list in BufferAlloc(): if the buffer tag we're
looking for is already there, we can immediately acquire the buffer's
per-buffer meta data lock (without ever acquiring the BufMgrLock).
We'd need to then check that the buffer hasn't changed under our feet
(compare the locked buffer's tag with what we think its tag should be,
and start over if it's different).
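
Roughly, with made-up helpers (buf_meta_lock, buf_tag_matches,
lookup_via_bufmgrlock) standing in for the real machinery, the fast
path would be:

#include <stdbool.h>
#include <string.h>

typedef struct BufferTagSketch
{
    unsigned    rnode;
    unsigned    blockNum;
} BufferTagSketch;

#define LOCAL_MAP_SIZE 64

typedef struct LocalMapEntry
{
    BufferTagSketch tag;
    int             buf_id;
    bool            valid;
} LocalMapEntry;

/* Backend-local memory, so reading it needs no lock at all. */
static LocalMapEntry local_map[LOCAL_MAP_SIZE];

/* Stubs for the pieces this sketch does not implement. */
extern void buf_meta_lock(int buf_id);
extern void buf_meta_unlock(int buf_id);
extern bool buf_tag_matches(int buf_id, const BufferTagSketch *tag);
extern int  lookup_via_bufmgrlock(const BufferTagSketch *tag);

static int
buffer_lookup_fastpath(const BufferTagSketch *tag)
{
    for (int i = 0; i < LOCAL_MAP_SIZE; i++)
    {
        if (!local_map[i].valid ||
            memcmp(&local_map[i].tag, tag, sizeof(*tag)) != 0)
            continue;

        int     buf_id = local_map[i].buf_id;

        /* Take only the per-buffer lock, then recheck the tag in case the
         * buffer was recycled for another page after we cached the mapping. */
        buf_meta_lock(buf_id);
        if (buf_tag_matches(buf_id, tag))
            return buf_id;          /* caller pins and then unlocks */
        buf_meta_unlock(buf_id);

        local_map[i].valid = false; /* stale entry: forget it */
        break;
    }

    /* Slow path: the usual shared lookup under the BufMgrLock. */
    return lookup_via_bufmgrlock(tag);
}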

-Neil



Re: [PATCHES] update i386 spinlock for hyperthreading

From:
Kenneth Marshall
Date:
On Fri, Feb 20, 2004 at 05:26:46AM -0500, Neil Conway wrote:
> "Simon Riggs" <simon@2ndquadrant.com> writes:
> >>Kenneth Marshall would like me to post this:
> >> I agree that in order to manage today's large memory machines, we
> >> need to have less contention in our buffer management strategies.
> >> The two main choke points are in the buffer hash table routines
> >> and in the buffer management linked lists. Unfortunately most of the
> >> code depends on holding the bufmgr lock on entry which eliminates
> >> many chances for parallelism.
> 
> Are you familiar with the work I've been doing recently to try to
> reduce the contention for the BufMgrLock? For example:
> 
> http://www.mail-archive.com/pgsql-hackers%40postgresql.org/msg40289.html
> 
> The approach I've taken is to remove the usage of the BufMgrLock for
> operations that do not affect the global state of the buffer pool.
> That means that operations like incrementing a buffer's refcount
> require only holding the per-buffer meta data lock. That's only one
> part of the puzzle, however: other ways to reduce BufMgrLock
> contention will probably be necessary.
> 
> Unfortunately this code is not in CVS yet: I've been too busy with
> school to wrap up the remaining issues it has. However, I hope to get
> it into the tree reasonably soon, and certainly in time for 7.5.
> 
> >> The number of buffer pools should at the very minimum be equal to
> >> the number of processors in the system. [...]
> 
> Not sure I understand exactly what you're suggesting here. Can you
> elaborate?
> 
> >> The next item to address is the buf_table concurrency. It appears
> >> that the same code that was used in the hash index update by Tom
> >> Lane could be used to split the buf_table accesses into a
> >> per-bucket access using a per-bucket lock and not a global
> >> lock. Modifying the current dyn_hash search and update code would
> >> make it look effectively like Mr. Lane's new hash index code.
> 
> Interesting. This would be complementary, of course, to my work on
> changing the buffer locking scheme: perhaps once that is done, we can
> reassess the remaining lock contention issues in the bufmgr, and
> implement this if necessary?
> 
> Another related idea that Jan Wieck and I had discussed was avoiding
> acquiring the BufMgrLock at all in BufferAlloc() where possible. For
> instance, we could enhance the existing PrivateRefCount mechanism, or
> invent some new mechanism, which would essentially keep a LRU list of
> buffer tag -> buffer id mappings in each backend's local memory. Then,
> we would walk this list in BufferAlloc(): if the buffer tag we're
> looking for is already there, we can immediately acquire the buffer's
> per-buffer meta data lock (without ever acquiring the BufMgrLock).
> We'd need to then check that the buffer hasn't changed under our feet
> (compare the locked buffer's tag with what we think its tag should be,
> and start over if it's different).
> 
> -Neil
> 

Neil,

I have been following the discussion on the pgsql-hackers list. I tried
to apply the patch you mentioned above, but I did not have the same
version of postgres and got a lot of rejects. I also wanted to see your
approach to adding a finer-grained lock structure to the buffer manager,
since some of my ideas would depend on the implementation used.

My comment on the number of buffer pools:

>> The number of buffer pools should at the very minimum be equal to
>> the number of processors in the system. [...]

refers to the fact that if you could provide a per-CPU buffer pool,
you would be able to minimize inter-CPU cache synchronization. The code
would need to be able to find out which CPU it was running on to make
that work. Otherwise, simply splitting the buffer pool into several
pools, each with its own lock, would increase the concurrency in
proportion to the number of pools. The buffer header would carry a pool
id so you could grab the appropriate per-pool lock, and a backend would
preferentially take a new buffer from the pool it is already using.
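
A rough sketch of that arrangement, with invented names and a fixed pool
count, purely to illustrate the per-pool lock, the pool id in the buffer
header, and the preferential allocation:

#include <pthread.h>
#include <stddef.h>

#define NUM_POOLS 4

typedef struct PoolBufDesc
{
    int                     pool_id;    /* which pool's lock covers this buffer */
    struct PoolBufDesc     *free_next;
    /* ... rest of the buffer header ... */
} PoolBufDesc;

typedef struct BufferPool
{
    pthread_mutex_t lock;               /* replaces one global BufMgrLock */
    PoolBufDesc    *free_head;
} BufferPool;

static BufferPool pools[NUM_POOLS];

/* Pool this backend last allocated from; preferring it keeps the buffer
 * headers it touches in one CPU's cache instead of bouncing them around. */
static int my_pool = 0;

static void
pools_init(void)
{
    for (int i = 0; i < NUM_POOLS; i++)
        pthread_mutex_init(&pools[i].lock, NULL);
}

/* Lock the pool that owns a given buffer, using the pool id in its header. */
static void
lock_pool_for(const PoolBufDesc *buf)
{
    pthread_mutex_lock(&pools[buf->pool_id].lock);
}

static PoolBufDesc *
get_free_buffer(void)
{
    /* Try the preferred pool first, then fall back to the others. */
    for (int attempt = 0; attempt < NUM_POOLS; attempt++)
    {
        int             p = (my_pool + attempt) % NUM_POOLS;
        BufferPool     *pool = &pools[p];
        PoolBufDesc    *buf = NULL;

        pthread_mutex_lock(&pool->lock);
        if (pool->free_head != NULL)
        {
            buf = pool->free_head;
            pool->free_head = buf->free_next;
        }
        pthread_mutex_unlock(&pool->lock);

        if (buf != NULL)
        {
            my_pool = p;                /* remember where we found one */
            return buf;
        }
    }
    return NULL;                        /* every pool empty: caller must evict */
}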

I am waiting for your commit to CVS before looking at it further. If you
think that will be a while yet, could you let me know which version of
postgres I should use to get a clean application of the patch from the
link below?

http://www.mail-archive.com/pgsql-hackers%40postgresql.org/msg40289.html

--Ken