Re: reducing the overhead of frequent table locks, v4
From | Jeff Davis
---|---
Subject | Re: reducing the overhead of frequent table locks, v4
Date |
Msg-id | 1309975340.3012.138.camel@jdavis
In response to | Re: reducing the overhead of frequent table locks, v4 (Robert Haas <robertmhaas@gmail.com>)
Responses | Re: reducing the overhead of frequent table locks, v4; Re: reducing the overhead of frequent table locks, v4
List | pgsql-hackers
On Thu, 2011-06-30 at 19:25 -0400, Robert Haas wrote:
> I'm really hurting
> for is some code review.

I'm trying to get my head into this patch. I have a couple of questions:

Does this happen to be based on any academic research? I don't necessarily expect it to be; I just thought I'd ask.

Here is my high-level understanding of the approach; please correct me where I'm mistaken:

Right now, concurrent activity on the same object, even with weak locks, causes contention because everything has to hit the same global lock partition. Because we expect an actual conflict to be rare, this patch turns the burden upside down, so that:

(a) those taking weak locks need only acquire a lock on their own lock entry in their own PGPROC, which means they don't contend with anyone else taking out a weak lock; and

(b) taking out a strong lock requires much more work, because it needs to look at every backend in the proc array to see whether it holds conflicting locks.

Of course, both of those things have some complexity, because the operations need to be properly synchronized. You force a valid schedule by relying on the memory-synchronization guarantees provided by taking those per-backend locks rather than a centralized lock, thus still avoiding lock contention in the common (weak-locks-only) case.

Regards,
Jeff Davis