dynamic shared memory and locks

From: Robert Haas
Subject: dynamic shared memory and locks
Date:
Msg-id: CA+TgmoaNda1f4fRTk=nZXAJhjOokEwRhyOhSyjo618gYKo4VhA@mail.gmail.com
Responses: Re: dynamic shared memory and locks  (Andres Freund <andres@2ndquadrant.com>)
Re: dynamic shared memory and locks  (Tom Lane <tgl@sss.pgh.pa.us>)
Re: dynamic shared memory and locks  (Heikki Linnakangas <hlinnakangas@vmware.com>)
List: pgsql-hackers
One of the things that you might want to do with dynamic shared memory
is store a lock in it.  In fact, my bet is that almost everything that
uses dynamic shared memory will want to do precisely that, because, of
course, it's dynamic *shared* memory, which means that it is
concurrently accessed by multiple processes, which tends to require
locking.  Typically, what you're going to want are either spinlocks
(for very short critical sections) or lwlocks (for longer ones).  It
doesn't really make sense to talk about storing heavyweight locks in
dynamic shared memory, because we're talking about storing locks with
the data structures that they protect, and heavyweight locks are used
to protect database or shared objects, not shared memory structures.
Of course, someone might think of trying to provide a mechanism for
the heavyweight lock manager to overflow to dynamic shared memory, but
that's a different thing altogether and not what I'm talking about
here.
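
To make the pattern concrete, here's a minimal sketch of what I
expect such code to look like (my_shared_state and its fields are
made up for illustration):

#include "postgres.h"
#include "storage/dsm.h"
#include "storage/spin.h"

/* Made-up shared structure living inside a DSM segment; the spinlock
 * sits right next to the data it protects. */
typedef struct
{
    slock_t     mutex;      /* protects counter */
    uint64      counter;
} my_shared_state;

static void
my_shared_state_setup(void)
{
    dsm_segment *seg = dsm_create(sizeof(my_shared_state));
    my_shared_state *state = (my_shared_state *) dsm_segment_address(seg);

    SpinLockInit(&state->mutex);    /* the problematic call, see below */
    state->counter = 0;

    /* typical very short critical section */
    SpinLockAcquire(&state->mutex);
    state->counter++;
    SpinLockRelease(&state->mutex);
}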

Right now, storing spinlocks in dynamic shared memory *almost* works,
but there are problems with --disable-spinlocks.  In that
configuration, we use semaphores to simulate spinlocks.  Every time
someone calls SpinLockInit(), it's going to allocate a new semaphore
which will never be returned to the operating system, so you're pretty
quickly going to run out.  There are a couple of things we could do
about this:

1. Decide we don't care.  If you compile with --disable-spinlocks, and
then you try to use dynamic shared memory, it's going to leak
semaphores until none remain, and then start failing from there until
the postmaster is restarted.  If you don't like that, provide a
working spinlock implementation for your platform.

2. Forbid the use of dynamic shared memory when compiling with
--disable-spinlocks.  This is a more polite version of #1.  It seems
likely to me that nearly every piece of code that uses dynamic shared
memory will require locking.  Instead of letting people allocate
dynamic shared memory segments anyway and then having them start
failing shortly after postmaster startup, we could just head the
problem off at the pass by denying the request for dynamic shared
memory in the first place.  Dynamic shared memory allocation can
always fail (e.g. because we're out of memory) and also has an
explicit off switch that will make all requests fail
(dynamic_shared_memory_type=none), so any code that uses dynamic
shared memory has to be prepared for a failure at that point, whereas
a failure in SpinLockInit() might be more surprising.

3. Provide an inverse for SpinLockInit, say SpinLockDestroy, and
require all code written for dynamic shared memory to invoke this
function on every spinlock before the shared memory segment is
destroyed.  I initially thought that this could be done using the
on_dsm_detach infrastructure, but it turns out that doesn't really
work.  The on_dsm_detach infrastructure is designed to make sure that
you *release* all of your locks when detaching - i.e. those hooks get
invoked for each process that detaches.  For this, you'd need an
on_dsm_final_detach callback that gets called only for the very last
detach (and after prohibiting any other processes from attaching).  I
can certainly engineer all that, but it's a decent amount of extra
work for everyone who wants to use dynamic shared memory to write the
appropriate callback, and because few people actually use
--disable-spinlocks, I think those callbacks will tend to be rather
lightly tested and thus a breeding ground for marginal bugs that
nobody's terribly excited about fixing.

4. Drop support for --disable-spinlocks.
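
To be concrete about why the semaphores leak under
--disable-spinlocks: in that configuration slock_t is a
PGSemaphoreData, and SpinLockInit() reduces to approximately this
(condensed from src/backend/storage/lmgr/spin.c):

void
s_init_lock_sema(volatile slock_t *lock)
{
    /*
     * Grabs a brand-new semaphore from the operating system.  There is
     * no inverse operation, so a spinlock in a segment that later goes
     * away leaks its semaphore until the postmaster exits.
     */
    PGSemaphoreCreate((PGSemaphore) lock);
}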

For what it's worth, my vote is currently for #2.  I can't think of
many interesting things to do with dynamic shared memory without having at
least spinlocks, so I don't think we'd be losing much.  #1 seems
needlessly unfriendly, #3 seems like a lot of work for not much, and
#4 seems excessive at least as a solution to this particular problem,
though there may be other arguments for it.  What do others think?

I think we're also going to want to be able to create LWLocks in
dynamic shared memory: some critical sections won't be short enough or
safe enough to be protected by spinlocks.  At some level this seems
easy: change LWLockAcquire and friends to accept an LWLock * rather
than an LWLockId, and similarly change held_lwlocks[] to hold LWLock
pointers rather than LWLockIds.  One tricky point is that you'd better
not deliberately detach a shared memory segment while you're still
holding lwlocks inside it, but I think just making that a coding rule
shouldn't cause any great problem.  The error path seems mostly OK,
too: throwing an error will call LWLockReleaseAll before doing the
resource manager cleanups that unmap the dynamic shared memory
segment, so locks in the segment get released before it goes away.
There may be corner cases I haven't thought about, though.  A bigger
problem is that I think we
want to avoid having a large amount of notational churn.  The obvious
way to do that is to get rid of the LWLockId array and instead declare
each fixed LWLock separately as e.g. LWLock *ProcArrayLock.  However,
creating a large number of new globals that will need to be
initialized in every new EXEC_BACKEND process seems irritating.  So
maybe a better idea is to do something like this:

#define BufFreelistLock (&fixedlwlocks[0])
#define ShmemIndexLock (&fixedlwlocks[1])
...
#define AutoFileLock (&fixedlwlocks[36])
#define NUM_FIXED_LWLOCKS 37
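
Under that scheme, the fixed locks keep their current notation, while
code using an lwlock in a DSM segment might look something like this
(LWLockInit is a hypothetical initializer we'd have to invent, and
my_dsm_stuff is made up):

/* Hypothetical usage of the proposed pointer-based API. */
typedef struct
{
    LWLock      lock;       /* lwlock embedded in the shared struct */
    int         nitems;     /* example state the lwlock protects */
} my_dsm_stuff;

static void
my_dsm_stuff_init(dsm_segment *seg)
{
    my_dsm_stuff *stuff = (my_dsm_stuff *) dsm_segment_address(seg);

    LWLockInit(&stuff->lock);                   /* hypothetical */
    LWLockAcquire(&stuff->lock, LW_EXCLUSIVE);  /* takes LWLock *, not LWLockId */
    stuff->nitems = 0;
    LWLockRelease(&stuff->lock);                /* release before any detach */
}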

Comments, suggestions?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


