Re: Reducing overhead of frequent table locks

From: Noah Misch
Subject: Re: Reducing overhead of frequent table locks
Date: Tue, 24 May 2011 15:38:52
Msg-id: 20110524153852.GC21833@tornado.gateway.2wire.net
In reply to: Re: Reducing overhead of frequent table locks (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: Reducing overhead of frequent table locks
List: pgsql-hackers
On Tue, May 24, 2011 at 10:35:23AM -0400, Robert Haas wrote:
> On Tue, May 24, 2011 at 10:03 AM, Noah Misch <noah@leadboat.com> wrote:
> > Let's see if I understand the risk better now: the new system will handle lock
> > load better, but when it does hit a limit, understanding why that happened
> > will be more difficult.  Good point.  No silver-bullet ideas come to mind for
> > avoiding that.
> 
> The only idea I can think of is to try to impose some bounds.  For
> example, suppose we track the total number of locks that the system
> can handle in the shared hash table.  We try to maintain the system in
> a state where the number of locks that actually exist is less than
> that number, even though some of them may be stored elsewhere.  You
> can imagine a system where backends grab a global mutex to reserve a
> certain number of slots, and store that in shared memory together with
> their fast-path list, but another backend which is desperate for space
> can go through and trim back reservations to actual usage.

Forcing artificial resource exhaustion is a high price to pay.  I suppose it's
quite like disabling Linux memory overcommit, and some folks would like it.
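
For concreteness, here's roughly how I picture the reservation scheme (a
sketch only; every name is hypothetical, and a pthread mutex stands in for
the global mutex you describe):

    #include <pthread.h>
    #include <stdbool.h>

    #define MAX_BACKENDS 128

    /* Hypothetical per-backend reservation state, guarded by one mutex. */
    typedef struct
    {
        int reserved;           /* shared-table slots set aside */
        int used;               /* fast-path locks actually held */
    } Reservation;

    static Reservation     reservations[MAX_BACKENDS];
    static int             slots_available;  /* init: shared table capacity */
    static pthread_mutex_t reservation_mutex = PTHREAD_MUTEX_INITIALIZER;

    /*
     * Reserve a slot before taking a fast-path lock, so the shared hash
     * table could always absorb every lock in existence.  A backend that
     * finds no free slots first trims everyone's reservations back to
     * actual usage.
     */
    static bool
    reserve_fastpath_slot(int my_id)
    {
        bool ok = false;

        pthread_mutex_lock(&reservation_mutex);
        if (slots_available == 0)
        {
            for (int i = 0; i < MAX_BACKENDS; i++)
            {
                slots_available += reservations[i].reserved -
                                   reservations[i].used;
                reservations[i].reserved = reservations[i].used;
            }
        }
        if (slots_available > 0)
        {
            slots_available--;
            reservations[my_id].reserved++;
            reservations[my_id].used++;   /* caller records the lock next */
            ok = true;
        }
        pthread_mutex_unlock(&reservation_mutex);
        return ok;
    }

The invariant is that the sum of reserved slots plus slots_available never
exceeds the shared table's capacity, so spilling fast-path locks into the
shared table can always succeed.  The price is refusing fast-path locks the
current code would happily have granted, even when the shared table still
has room in practice.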

> Another random idea for optimization: we could have a lock-free array
> with one entry per backend, indicating whether any fast-path locks are
> present.  Before acquiring its first fast-path lock, a backend writes
> a 1 into that array and inserts a store fence.  After releasing its
> last fast-path lock, it performs a store fence and writes a 0 into the
> array.  Anyone who needs to grovel through all the per-backend
> fast-path arrays for whatever reason can perform a load fence and then
> scan the array.  If I understand how this stuff works (and it's very
> possible that I don't), when the scanning backend sees a 0, it can be
> assured that the target backend has no fast-path locks and therefore
> doesn't need to acquire and release that LWLock or scan that fast-path
> array for entries.

I'm probably just missing something, but can't that conclusion become obsolete
arbitrarily quickly?  What if the scanning backend sees a 0, and the subject
backend is currently sleeping just before it would have bumped that value?  We
need to take the LWLock if there's any chance that the subject backend has not
yet seen the scanning backend's strong_lock_counts[] update.
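
To pin down the window: nothing in that recipe orders the subject backend's
read of strong_lock_counts[] against its store of the flag.  A minimal
sketch in C11 atomics (names hypothetical):

    #include <stdatomic.h>

    #define MAX_BACKENDS 128

    /* One flag per backend: does it hold any fast-path locks? */
    static atomic_int fastpath_active[MAX_BACKENDS];

    /* Subject backend, before taking its first fast-path lock. */
    static void
    begin_fastpath(int my_id)
    {
        atomic_store(&fastpath_active[my_id], 1);
        atomic_thread_fence(memory_order_seq_cst);  /* the "store fence" */
        /* ... record fast-path lock entries ... */
    }

    /* Scanning backend: runs after it bumps strong_lock_counts[]. */
    static int
    may_hold_fastpath(int id)
    {
        atomic_thread_fence(memory_order_seq_cst);  /* the "load fence" */
        return atomic_load(&fastpath_active[id]);
    }

    /*
     * The window: if the subject checked strong_lock_counts[] before
     * begin_fastpath(), the fences don't help.
     *
     *   subject backend i                  scanning backend
     *   -----------------                  ----------------
     *   reads strong_lock_counts[p] == 0
     *   (sleeps just before
     *    begin_fastpath)
     *                                      strong_lock_counts[p]++
     *                                      may_hold_fastpath(i) == 0
     *                                      => skips backend i's array
     *   begin_fastpath(i)
     *   records a fast-path lock that
     *   conflicts with the scanner's
     *   strong lock
     */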

nm

