reducing the overhead of frequent table locks, v3
From | Robert Haas |
---|---|
Subject | reducing the overhead of frequent table locks, v3 |
Date | |
Msg-id | BANLkTikyb_UqDM0pz2ALzXaAbtjsnS4iSg@mail.gmail.com |
Responses | Re: reducing the overhead of frequent table locks, v3 (Noah Misch <noah@leadboat.com>) |
List | pgsql-hackers |
Here's a third version of the patch. Aside from some minor rebasing and a few typo corrections, the main change is that I've fixed GetLockConflicts() to do something sensible.

Thus far, locks taken via the fast-path mechanism are not shown in pg_locks. I've been mulling over what to do about that. It's a bit tricky to show a snapshot of the locks in a way that's guaranteed to be globally consistent, because you'd need to seize one lock per backend plus one lock per lock manager partition, which will typically exceed the maximum number of LWLocks that can be simultaneously held by a single backend. And if you don't do that, then you must either scan the per-backend queues first and then the lock manager partitions, or the other way around. Since locks can bounce from the per-backend queues to the primary lock table, the first approach offers the possibility of seeing the same lock twice, while the second offers the possibility of missing it altogether. I'm inclined to scan the per-backend queues first and just document that in rare cases you may see duplicate entries. We could also de-duplicate before returning results, but I doubt it's worth the trouble. Anyway, opinions?

A related question is whether a fast-path lock should be displayed differently in pg_locks than one which lives in the primary lock table. We could add a new boolean (or "char") column to pg_locks to mark locks as fast-path or not, or maybe change the granted column to a three-valued column (fast-path-granted, normal-granted, waiting). Or we could not distinguish them at all. Again, opinions?

One other concern, which Noah and I discussed previously, is what happens when someone tries to take a strong table lock (say, AccessExclusiveLock) and many other backends already have fast-path locks on the table. Transferring those locks to the primary lock table might fail partway through due to shared memory exhaustion. While that's always possible for any lock acquisition, currently all locks are on an equal footing, each needing enough shared memory for at most one LOCK and at most one PROCLOCK. This change makes strong table locks more likely to be victims than other types of locks.

Initially, my gut feeling was to worry about this, but the more I think about it, the less worried I feel. First, in any situation where this happens, the current code would have started chucking errors sooner. Second, you have to imagine that the system is sitting there in a steady state where the lock table memory is perennially aaaaaaalmost exhausted, but never quite goes over the edge. That just doesn't seem very likely - processes take and release locks all the time, and it's hard to imagine sitting right on the brink of disaster without ever crossing over. If you do manage to have such a system, you probably ought to raise max_locks_per_transaction rather than continuing to live dangerously. Basically, although I can imagine a theoretical way this could be an annoying problem, I can't really imagine a realistic test case that would hit it.

Anyway, that's where I'm at. Reviewing, testing, etc. appreciated.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
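[Editor's illustration, not part of the patch or the original message.] To make the scan-order trade-off described above concrete, here is a minimal, self-contained C sketch under simplified assumptions: the FastPathSlot/LockEntry structures, the slot counts, and the de-duplication pass are all invented for illustration and are not taken from the PostgreSQL sources. It scans per-backend fast-path slots first and a toy "primary lock table" second, then shows how a duplicate arising from a lock migrating between the two mid-scan could be filtered before returning results.

```c
/*
 * Toy sketch (not PostgreSQL code): collect a lock listing by scanning
 * per-backend fast-path slots first, then a "primary" table. Because a
 * lock can migrate from a fast-path slot to the primary table between
 * the two scans, the same (pid, relid, mode) may be seen twice; the
 * optional de-dup pass shows how duplicates could be filtered.
 */
#include <stdio.h>
#include <stdbool.h>

#define MAX_BACKENDS    4
#define FP_SLOTS        16      /* invented: fast-path slots per backend */
#define MAX_RESULTS     128

typedef struct { unsigned int relid; int mode; bool used; } FastPathSlot;
typedef struct { int pid; FastPathSlot slots[FP_SLOTS]; } BackendFP;
typedef struct { int pid; unsigned int relid; int mode; } LockEntry;

static BackendFP backends[MAX_BACKENDS];
static LockEntry primary_table[MAX_RESULTS];
static int primary_count;

/* Phase 1 + 2: per-backend queues first, then the primary table. */
static int
collect_locks(LockEntry *out)
{
    int n = 0;

    for (int b = 0; b < MAX_BACKENDS; b++)
        for (int s = 0; s < FP_SLOTS; s++)
            if (backends[b].slots[s].used)
            {
                out[n].pid = backends[b].pid;
                out[n].relid = backends[b].slots[s].relid;
                out[n].mode = backends[b].slots[s].mode;
                n++;
            }

    for (int i = 0; i < primary_count; i++)
        out[n++] = primary_table[i];

    return n;
}

/* Optional de-dup: drop later entries that repeat an earlier one. */
static int
dedup_locks(LockEntry *list, int n)
{
    int kept = 0;

    for (int i = 0; i < n; i++)
    {
        bool seen = false;

        for (int j = 0; j < kept; j++)
            if (list[j].pid == list[i].pid &&
                list[j].relid == list[i].relid &&
                list[j].mode == list[i].mode)
            {
                seen = true;
                break;
            }
        if (!seen)
            list[kept++] = list[i];
    }
    return kept;
}

int
main(void)
{
    LockEntry results[2 * MAX_RESULTS];

    /* One fast-path lock, plus the same lock already present in the
     * primary table, as could happen if it was transferred mid-scan. */
    backends[0].pid = 1001;
    backends[0].slots[0] = (FastPathSlot){ .relid = 16384, .mode = 1, .used = true };
    primary_table[primary_count++] = (LockEntry){ .pid = 1001, .relid = 16384, .mode = 1 };

    int n = collect_locks(results);
    printf("raw entries: %d, after de-dup: %d\n", n, dedup_locks(results, n));
    return 0;
}
```

The duplicate only appears in the narrow window where a lock is transferred between scans, which is why simply documenting the possibility may be cheaper than always paying for the de-duplication pass.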
Attachments