Re: Proposal of tunable fix for scalability of 8.4

From: Jignesh K. Shah
Subject: Re: Proposal of tunable fix for scalability of 8.4
Date:
Msg-id: 49B922C0.6050700@sun.com
In response to: Re: Proposal of tunable fix for scalability of 8.4  (Scott Carey <scott@richrelevance.com>)
Responses: Re: Proposal of tunable fix for scalability of 8.4
Re: Proposal of tunable fix for scalability of 8.4
List: pgsql-performance


On 03/11/09 22:01, Scott Carey wrote:
On 3/11/09 3:27 PM, "Kevin Grittner" <Kevin.Grittner@wicourts.gov> wrote:

I'm a lot more interested in what's happening between 60 and 180 than
over 1000, personally.  If there was a RAID involved, I'd put it down
to better use of the numerous spindles, but when it's all in RAM it
makes no sense.

If there is enough lock contention and the common case is a short-lived shared lock, it makes perfect sense. Fewer readers are blocked waiting on writers at any given time. Readers can ‘cut’ in line ahead of writers within a certain scope (only up to the number waiting at the time a shared lock reaches the head of the queue). Essentially this clumps shared and exclusive locks into larger streaks, and allows for higher shared-lock throughput.
Exclusive locks may be delayed, but will NOT be starved, since on the next iteration a streak of exclusive locks will occur first in the list, and they will all process before any more shared locks can go.
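The clumping policy described above can be sketched as a toy model in C (this is hypothetical illustration code, not PostgreSQL's actual lwlock.c; the `wake_batch` function and the 'S'/'X' string encoding of the wait queue are invented for this sketch):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model of a lock-release wakeup decision.  The FIFO wait queue is
 * a string of 'S' (shared) and 'X' (exclusive) waiters.  Policy: if the
 * head waiter is exclusive, wake only it; if the head is shared, wake
 * every shared waiter currently queued, so shared requests "clump" into
 * one batch.  Exclusive waiters keep their queue positions, so they are
 * delayed but never starved: once one reaches the head, it runs before
 * any later shared waiters. */
static size_t wake_batch(const char *queue, char *woken, size_t cap)
{
    size_t n = 0;

    woken[0] = '\0';
    if (queue[0] == '\0' || cap < 2)
        return 0;

    if (queue[0] == 'X')
    {
        /* exclusive at the head: wake it alone */
        woken[0] = 'X';
        woken[1] = '\0';
        return 1;
    }

    /* shared at the head: sweep the queue, waking all shared waiters */
    for (size_t i = 0; queue[i] != '\0' && n + 1 < cap; i++)
        if (queue[i] == 'S')
            woken[n++] = 'S';
    woken[n] = '\0';
    return n;
}
```

With a queue like "SXSSX", all three shared waiters wake in one batch while both exclusive waiters hold their places, which is the larger-streaks behavior described above.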

This will even help on a single-CPU system if the workload is read-dominated, lowering read latency and slightly increasing write latency.

If you want to make this more fair, instead of freeing all shared locks, limit the count to some number, such as the number of CPU cores.  Perhaps rather than wake-up-all-waiters=true, the parameter can be an integer representing how many shared locks can be freed at once if an exclusive lock is encountered.

Well, I am waking up not just shared but shared and exclusive waiters. However, I like your idea of waking up the next N waiters, where N matches the number of CPUs available. In my case that is 64, so this works well: of the 64 waiters running, one will be able to take the next lock immediately, so no cycles are wasted where nobody holds the lock. That waste is common when you wake only one waiter and hope that its process is on a CPU (in my case there are 64 processes) and able to acquire the lock. The probability of acquiring the lock within the next few cycles is much lower for a single waiter than when 64 such processes get the chance and then contend based on who is already on a CPU. That reduces the period during which nobody holds the lock, which helps cut out "artifact" idle time on the system.
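The wake-next-N variant described here can be sketched the same way (again hypothetical illustration code, not the actual patch; `wake_next_n` and its parameters are invented for this sketch):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model of the wake-next-N policy: on lock release, wake up to
 * n_cpus waiters from the front of the FIFO queue, shared ('S') and
 * exclusive ('X') alike.  With one woken process per CPU, whichever is
 * scheduled first can grab the lock at once, shrinking the window in
 * which nobody holds it. */
static size_t wake_next_n(const char *queue, size_t n_cpus,
                          char *woken, size_t cap)
{
    size_t n = 0;

    while (queue[n] != '\0' && n < n_cpus && n + 1 < cap)
    {
        woken[n] = queue[n];
        n++;
    }
    woken[n] = '\0';
    return n;
}
```

For example, with 64 CPUs a queue of "SXS" wakes entirely, while on a smaller box only the first few waiters wake and the rest keep their queue order.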


As soon as I get more "cycles" I will try variations of it, but it would help if others could try it out in their own environments to see if it helps their instances.


-Jignesh
