Re: Reducing overhead of frequent table locks

From: Robert Haas
Subject: Re: Reducing overhead of frequent table locks
Date:
Msg-id: BANLkTikBeCRcOXRLsPfJ0kX5NpcMTsxJdw@mail.gmail.com
In reply to: Re: Reducing overhead of frequent table locks  (Simon Riggs <simon@2ndQuadrant.com>)
Responses: Re: Reducing overhead of frequent table locks
List: pgsql-hackers
On Wed, May 25, 2011 at 8:27 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
> I got a bit lost with the description of a potential solution. It
> seemed like you were unaware that there is a local lock and a shared
> lock table, maybe just me?

No, I'm not unaware of the local lock table.  The point of this
proposal is to avoid fighting over the LWLocks that protect the shared
hash table by allowing some locks to be taken without touching it.
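To make the idea concrete, here is a rough sketch in C. It is illustration only: the names (FastPathLockAcquire, StrongLockRequestCount, fpSlots) and the details are invented for this sketch, not taken from any actual patch.

/*
 * Sketch only.  Weak relation locks are recorded in a small per-backend
 * array instead of the shared lock hash, so acquiring them takes no
 * lock-manager LWLock.  A shared count of pending strong-lock requests
 * tells a backend when it must fall back to the ordinary shared-table
 * path instead.  All names below are invented for illustration.
 */
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t Oid;               /* stand-in for the real Oid typedef */

#define FP_SLOTS_PER_BACKEND 16     /* small, fixed number of fast-path slots */

typedef struct FastPathSlot
{
    Oid  relid;                     /* relation this backend has locked */
    int  lockmode;                  /* e.g. AccessShareLock, RowExclusiveLock */
    bool in_use;
} FastPathSlot;

/* Per-backend storage; in reality this would sit in that backend's PGPROC. */
static FastPathSlot fpSlots[FP_SLOTS_PER_BACKEND];

/*
 * Shared counter: nonzero while some backend wants a conflicting strong
 * lock.  It would really live in shared memory; a plain variable keeps
 * the sketch self-contained.
 */
static volatile uint32_t StrongLockRequestCount = 0;

/*
 * Try to record a weak lock locally, without touching the shared hash
 * table.  Returns false when the caller must go through the ordinary
 * shared lock table (strong lock pending, or no free slot).
 */
static bool
FastPathLockAcquire(Oid relid, int lockmode)
{
    if (StrongLockRequestCount != 0)
        return false;               /* a conflicting strong lock may exist */

    for (int i = 0; i < FP_SLOTS_PER_BACKEND; i++)
    {
        if (!fpSlots[i].in_use)
        {
            fpSlots[i].relid = relid;
            fpSlots[i].lockmode = lockmode;
            fpSlots[i].in_use = true;
            return true;            /* no shared LWLock was taken */
        }
    }
    return false;                   /* out of slots; use the shared table */
}

In this sketch, a backend that wants a strong lock would increment StrongLockRequestCount and then transfer the other backends' fast-path entries into the shared table before doing the normal conflict check.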

> Design seemed relatively easy from there: put local lock table in
> shared memory for all procs. We then have a use_strong_lock at proc
> and at transaction level. Anybody that wants a strong lock first sets
> use_strong_lock at proc and transaction level, then copies all local
> lock data into shared lock table, double checking
> TransactionIdIsInProgress() each time. Then queues for lock using the
> now fully set up shared lock table. When transaction with strong lock
> completes we do not attempt to reset transaction level boolean, only
> at proc level, since DDL often occurs in groups and we want to avoid
> flip-flopping quickly between lock share states. Cleanup happens
> regularly by bgwriter, perhaps every 10 seconds or so. All locks are
> still visible in pg_locks.

I'm not following this...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

