Re: Improve LWLock tranche name visibility across backends
From | Sami Imseih |
---|---|
Subject | Re: Improve LWLock tranche name visibility across backends |
Date | |
Msg-id | CAA5RZ0v5eLMHu+C0xRQmN==fXi3b8rsL66X0ScuqgDdSs4+m5w@mail.gmail.com |
In response to | Re: Improve LWLock tranche name visibility across backends (Bertrand Drouvot <bertranddrouvot.pg@gmail.com>) |
List | pgsql-hackers |
> > > Unlike the local list of tranche names, appending to and searching the
> > > shared memory list requires an LWLock; in exclusive mode to append, and
> > > shared mode to search.
> >
> > The thing that stood out the most to me is how much more expensive looking
> > up the lock name is. At the risk of suggesting even more complexity, I'm
> > wondering if we should add some sort of per-backend cache to avoid the need
> > for locking and dsa_get_address() calls every time you need to look up a
> > lock name.
>
> Yeah, I have the same concern that GetLWTrancheNameByID() might be too costly.
> OTOH it's not really used in a hot path, "only" when displaying wait events.

LWLockTrancheGetName also gets called, via T_NAME, when trace_lwlock is
enabled. This is a developer option that requires LOCK_DEBUG to be set at
compile time, so that path may become even slower than it currently is under
high concurrency. The same applies if dtrace is enabled and the specific
LWLock probe is used:

```
if (TRACE_POSTGRESQL_LWLOCK_WAIT_START_ENABLED())
    TRACE_POSTGRESQL_LWLOCK_WAIT_START(T_NAME(lock), mode);
```

Neither of the above cases is exercised in normal production activity. For
the dtrace case, b94409a02f612 added the _ENABLED macros to ensure that
GetLWTrancheName (via T_NAME) is only called when necessary:

"If dtrace is compiled in but disabled, the lwlock dtrace probes still
evaluate their arguments. Since PostgreSQL 13, T_NAME(lock) does nontrivial
work, so it should be avoided if not needed. To fix, make these calls
conditional on the *_ENABLED() macro corresponding to each probe."

The normal case is when pg_stat_get_activity calls pgstat_get_wait_event to
retrieve the wait event via GetLWLockIdentifier. However, as mentioned, this
is not on the hot path.

I think we could add a local backend copy that stays up to date with the DSA.
One idea would be to use an atomic counter to track the number of entries in
the DSA and compare it against a backend-local counter whenever a tranche
name lookup occurs. If the atomic counter is higher (entries are never
deleted), the backend refreshes its local copy. Refreshing the local table
should be a rare occurrence, but the scheme does add one atomic fetch to
every name lookup, in all of the code paths above. Perhaps there's a better
approach?

--
Sami