Re: dynamic shared memory and locks

From: Robert Haas
Subject: Re: dynamic shared memory and locks
Date:
Msg-id: CA+TgmobiU1w5SqbAt8ymqc9Ue9uqEyKZTXDrf4w_zSLN=v=AuA@mail.gmail.com
In reply to: Re: dynamic shared memory and locks  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: dynamic shared memory and locks  (Stephen Frost <sfrost@snowman.net>)
List: pgsql-hackers
On Sun, Jan 5, 2014 at 2:06 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I seem to recall that there was some good reason for keeping all the
> LWLocks in an array, back when the facility was first designed.
> I'm too lazy to research the point right now, but you might want to
> go back and look at the archives around when lwlock.c was written.

To some extent it's an orthogonal question.  It's true that there
isn't much point in using LWLock * rather than LWLockId to refer to
LWLocks unless we wish to be able to store them outside the shared
memory segment, but the reverse is not true: just because we have the
ability to move things outside the main array in the general case
doesn't make it a good idea in any particular case.  Andres's email
seems to indicate that he sees performance advantages in moving buffer
locks elsewhere, and I have a sneaking suspicion that we'll find that
it's more convenient to move some other things around as well, but
that's policy, not mechanism.  Very little of the core LWLock
machinery actually cares how the locks are stored, so the attached
patch to make it use LWLock * rather than LWLockId as a handle is
pretty straightforward.
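To make the mechanism/policy distinction concrete, here is a minimal sketch (invented field and function names, not the actual patch) of what the handle change buys us: an index-based handle can only refer to slots in the one global array, while a pointer-based handle can refer to a lock stored anywhere in shared memory.

```c
#include <stddef.h>

/* Minimal sketch; field names are invented, not the real definitions. */
typedef struct LWLock
{
    int         exclusive;      /* stand-in for the real lock state */
} LWLock;

/* Old interface: an integer index into one global array. */
typedef int LWLockId;
static LWLock MainLWLockArray[64];

static void
LWLockAcquireById(LWLockId id)
{
    MainLWLockArray[id].exclusive = 1;
}

/* New interface: a pointer, so the lock can live anywhere in shared
 * memory -- the main array, a buffer header, or some other struct. */
static void
LWLockAcquireByPtr(LWLock *lock)
{
    lock->exclusive = 1;
}
```

Whether any given lock actually moves out of the main array is then a separate (policy) decision.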

The only real problem I see here is that we occasionally *print out*
LWLockIds as a way of identifying lwlocks.  This is not a great
system, but printing out pointers is almost certainly much worse (e.g.
because of ASLR).  The cases where this is currently an issue are:

- You try to release a lwlock you haven't acquired.  We elog(ERROR) the ID.
- You define LWLOCK_STATS.  The resulting reports print the lock ID.
- You define LOCK_DEBUG and set trace_lwlocks=true.  We print the lock
ID in the trace messages.
- You compile with --enable-dtrace.  We pass the lock IDs to the dtrace probes.

In the attached patch I handled the first case by printing the pointer
(which I don't feel too bad about) and the remaining three cases by
leaving them broken.  I wouldn't shed a tear about ripping out
trace_lwlocks, but LWLOCK_STATS is useful and I bet somebody is using
--enable-dtrace, so we probably need to fix those cases at least.  I
suppose one option is to make LWLOCK_STATS and the dtrace probes only
look at locks in the main array and just ignore everything else.  But
that's kind of crappy, especially if we're going to soon move buffer
locks out of the main array.

Another idea is to include some identifying information in the lwlock.
For example, each lwlock could have a char *name in it, and we could
print the name.  In theory, this could be a big step forward in terms
of usability, because it'd spare us all needing to remember that 4 ==
ProcArrayLock.  But it's awkward for buffer locks, of which there
might be a great many, and we surely don't want to allocate a
dynamically-generated string in shared memory for each one.  You could
do a bit better by making the identifying information a string plus an
integer, because then all the buffer locks could set the string to a
static constant like "buffer content lock" and the integer to the
buffer number, and similarly for lock manager partition locks and so
on.  This is appealing, but would increase the size of LWLockPadded
from 16 bytes to 32 on 64-bit platforms where slock_t is four bytes or
less, which I'm not that excited about even though it would reduce
cache line contention in some cases.
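A rough sketch of the string-plus-integer scheme (names invented for illustration): singleton locks would set the integer to a sentinel, while buffer locks would all point at one static string and vary the integer.

```c
#include <stdio.h>

/* Hypothetical identification scheme, as described above. */
typedef struct LWLockIdent
{
    const char *name;       /* static constant, e.g. "buffer content lock" */
    int         instance;   /* buffer number, partition number, or -1 */
} LWLockIdent;

/* Format an identifier for error messages, trace output, or probes. */
static void
lwlock_describe(const LWLockIdent *id, char *buf, size_t buflen)
{
    if (id->instance >= 0)
        snprintf(buf, buflen, "%s %d", id->name, id->instance);
    else
        snprintf(buf, buflen, "%s", id->name);
}
```

The cost is the extra pointer and integer in every lock, which is where the 16-to-32-byte padding growth comes from.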

Yet a third idea is to try to reverse-engineer a name for a given
lwlock from the pointer address.  If it's an offset into the main
array, this is easy enough to do, and even if we ended up with several
arrays (like one for bufmgr locks) it wouldn't be too hard to write
code to figure out which array contains it and emit the appropriate
string.  The only real problem that I see with this is that it might
cause a performance hit.  A performance hit when running with
trace_lwlocks or LWLOCK_STATS is not really a problem, but people
won't like it if --enable-dtrace slows things down.
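The reverse-engineering idea might look something like this (array name and size are assumptions for the sketch): a range check against each known array, recovering the index by pointer arithmetic when it matches.

```c
#include <stddef.h>

#define NUM_FIXED_LWLOCKS 32    /* assumed size of the main array */

typedef struct LWLock
{
    int         state;          /* placeholder contents */
} LWLock;

static LWLock MainLWLockArray[NUM_FIXED_LWLOCKS];

/* If the pointer falls inside the main array, recover its index so a
 * human-readable name can be looked up; otherwise return -1 (the lock
 * lives in some other array, e.g. a future bufmgr array, which would
 * get its own range check and naming rule). */
static int
lwlock_main_array_index(const LWLock *lock)
{
    if (lock >= MainLWLockArray &&
        lock < MainLWLockArray + NUM_FIXED_LWLOCKS)
        return (int) (lock - MainLWLockArray);
    return -1;
}
```

The per-call cost is just a couple of comparisons per known array, which is where the dtrace overhead worry comes in if the number of arrays grows.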

Preferences, other ideas?

None of these ideas are a complete solution for LWLOCK_STATS.  In the
other three cases noted above, we only need an identifier for the lock
"instantaneously", so that we can pass it off to the logger or dtrace
or whatever.  But LWLOCK_STATS wants to hold on to data about the
locks that were visited until the end of the session, and it does that
using an array that is *indexed* by lwlockid.  I guess we could
replace that with a hash table.  Ugh.  Any suggestions?
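For concreteness, the hash-table replacement could be as simple as a small open-addressed table keyed by the lock pointer (table size and hash mix are invented here; the real thing would presumably use dynahash):

```c
#include <stdint.h>
#include <stddef.h>

#define STATS_TABLE_SIZE 1024          /* power of two, assumed */

typedef struct lwlock_stats
{
    const void *key;                   /* lock pointer; NULL = empty slot */
    int         sh_acquire_count;
    int         ex_acquire_count;
} lwlock_stats;

static lwlock_stats stats_table[STATS_TABLE_SIZE];

/* Find (or create) the stats entry for a given lock pointer. */
static lwlock_stats *
get_lwlock_stats(const void *lock)
{
    uintptr_t   h = (uintptr_t) lock;
    size_t      i;

    /* cheap integer mix so nearby pointers spread out */
    h ^= h >> 33;
    h *= (uintptr_t) 0xff51afd7ed558ccdULL;
    h ^= h >> 33;

    for (i = (size_t) h & (STATS_TABLE_SIZE - 1);
         stats_table[i].key != NULL && stats_table[i].key != lock;
         i = (i + 1) & (STATS_TABLE_SIZE - 1))
        ;                              /* linear probing */

    stats_table[i].key = lock;
    return &stats_table[i];
}
```

That replaces the O(1) array index with a hash probe on every acquisition, but since LWLOCK_STATS is a debugging build that may be tolerable.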

(Incidentally, while developing this patch I found a bug in the
current code: lock.c iterates over all PGPROCs from 0 to
ProcGlobal->allProcCount and takes the backendLock for each, but if
max_prepared_transactions>0 then the PGPROCs for prepared transactions
do not have a backendLock and so we take and release BufFreelistLock -
i.e. 0 - a number of times equal to max_prepared_transactions.  That's
not cool.)

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
