Re: dynamic shared memory and locks

From: Kohei KaiGai
Subject: Re: dynamic shared memory and locks
Date:
Msg-id: CADyhKSWGrrx=oyAavy066QTue6hC=B5MO1X8utm-gkorm2vJrA@mail.gmail.com
In reply to: Re: dynamic shared memory and locks  (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: dynamic shared memory and locks  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
2014-02-08 4:52 GMT+09:00 Robert Haas <robertmhaas@gmail.com>:
> On Tue, Jan 21, 2014 at 11:37 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>> One idea I just had is to improve the dsm_toc module so that it can
>> optionally set up a tranche of lwlocks for you, and provide some
>> analogues of RequestAddinLWLocks and LWLockAssign for that case.  That
>> would probably make this quite a bit simpler to use, at least for
>> people using it with dynamic shared memory.  But I think that's a
>> separate patch.
>
> I played with this a bit today and it doesn't actually seem to
> simplify things very much.  The backend that creates the DSM needs to
> do this:
>
>     lwlocks = shm_toc_allocate(toc, sizeof(LWLockPadded) * nlwlocks);
>     for (i = 0; i < nlwlocks; ++i)
>         LWLockInitialize(&lwlocks[i].lock, tranche_id);
>
> Since that's all of three lines, encapsulating it doesn't look all
> that helpful.  Then each backend needs to do something like this:
>
>     static LWLockTranche mytranche;
>     mytranche.name = "some descriptive module name";
>     mytranche.array_base = lwlocks;
>     mytranche.array_stride = sizeof(LWLockPadded);
>     LWLockRegisterTranche(tranche_id, &mytranche);
>
> That's not a lot of code either, and there's no obvious way to reduce
> it much by hooking into shm_toc.
>
> I thought maybe we needed some cleanup when the dynamic shared memory
> segment is unmapped, but it looks like we really don't.  One can do
> something like LWLockRegisterTranche(tranche_id, NULL) for debugging
> clarity, but it isn't strictly needed; one could also release all lwlocks
> from the tranche, or assert that none are held, but skipping that should
> really only be a problem if the user does something like
> LWLockAcquire(lock_in_the_segment); dsm_detach(seg), because error
> recovery starts by releasing *all* lwlocks.  And if the user does it
> explicitly, I'm kinda OK with that just seg faulting.  After all, the
> user could equally well have done dsm_detach(seg);
> LWLockAcquire(lock_in_the_segment) and there's no way at all to
> cross-check for that sort of mistake.
>
Would it cause any new problem if dsm_detach() also released the lwlocks
allocated within the DSM segment being detached?
Held lwlocks are tracked in the held_lwlocks[] array, whose length is
usually 100.  When dsm_detach() is called on the segment spanning
addr ~ (addr + length), it seems an easy job to walk this array and find
the lwlocks that fall within that range.
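For example, a helper inside lwlock.c along these lines could do it.
This is just a minimal sketch, assuming held_lwlocks[] and
num_held_lwlocks track the held locks as LWLock pointers, and that
dsm_detach() would pass in the segment's mapped address and length:

    /*
     * Minimal sketch: release any held lwlocks that live inside the
     * address range [addr, addr + length) of a segment being detached.
     * Walk the array backwards, because LWLockRelease() removes the
     * released lock from held_lwlocks[] and compacts the entries after it.
     */
    void
    LWLockReleaseAllInRange(void *addr, Size length)
    {
        char   *start = (char *) addr;
        char   *end = start + length;
        int     i;

        for (i = num_held_lwlocks - 1; i >= 0; i--)
        {
            char   *lock = (char *) held_lwlocks[i];

            if (lock >= start && lock < end)
                LWLockRelease(held_lwlocks[i]);
        }
    }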


> I do see one thing about the status quo that does look reasonably
> annoying: the code expects that tranche IDs are going to stay
> relatively small.  For example, consider a module foobar that uses DSM
> + LWLocks.  It won't do at all for each backend, on first use of the
> foobar module, to do LWLockNewTrancheId() and then reuse that
> tranche_id repeatedly for each new DSM - because over a system
> lifetime of months, tranche IDs could grow into the millions, causing
> LWLockTrancheArray to get really big (and eventually repalloc will
> fail).  Rather, the programmer needs to ensure that
> LWLockNewTrancheId() gets called *once per postmaster lifetime*,
> perhaps by allocating a chunk of permanent shared memory and using
> that to store the tranche_id that should be used each time an
> individual backend fires up a DSM.  Considering that one of the goals
> of dynamic shared memory is to allow modules to be loaded after
> postmaster startup and still be able to do useful stuff, that's kind
> of obnoxious.  I don't have a good idea what to do about it, though;
> making LWLockTrancheArray anything other than a flat array looks
> likely to slow down --enable-dtrace builds unacceptably.
>
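For reference, the once-per-postmaster pattern you describe would look
roughly like the sketch below; the "foobar" names are illustrative, and
it assumes the module can still carve a small chunk out of the main
shared memory segment when it is loaded:

    /* Minimal sketch: store the tranche_id once per postmaster lifetime. */
    typedef struct FoobarSharedState
    {
        int     tranche_id;
    } FoobarSharedState;

    static FoobarSharedState *foobar_state = NULL;

    static void
    foobar_shmem_init(void)
    {
        bool    found;

        LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
        foobar_state = ShmemInitStruct("foobar",
                                       sizeof(FoobarSharedState), &found);
        if (!found)
            foobar_state->tranche_id = LWLockNewTrancheId();
        LWLockRelease(AddinShmemInitLock);

        /* Every backend reuses foobar_state->tranche_id for each new DSM. */
    }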
Just my rough idea: doesn't it make sense to have an offset from the
head of the shared memory segment that contains the lwlock, instead of
the identifier form, to indicate the cstring datum?  That does not
prevent allocating it later, after postmaster startup, and there is no
problem even if the number of dynamic lwlocks grows into the millions
or more.
If an lwlock had a "tranche_offset" instead of a "tranche_id", all it
needs to do is (see the sketch below):
1. Find the DSM segment or regular SHM segment that contains the
   supplied lwlock.
2. Calculate mapped_address + tranche_offset; that is the local address
   where the cstring form is stored.

It probably needs a common utility routine in dsm.c to find the
particular DSM segment that contains a supplied address
(an inverse function of dsm_segment_address).
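A minimal sketch of the name lookup under that scheme; dsm_find_segment()
(the inverse routine suggested above) and the tranche_offset field are
assumptions, not existing APIs:

    static const char *
    LWLockTrancheName(LWLock *lock)
    {
        /* hypothetical inverse of dsm_segment_address() */
        dsm_segment *seg = dsm_find_segment(lock);
        char       *base;

        if (seg != NULL)
            base = (char *) dsm_segment_address(seg);
        else
            base = main_shmem_base();   /* hypothetical accessor for the
                                         * main shared memory segment */

        /* The tranche name is stored as a cstring at that offset. */
        return base + lock->tranche_offset;
    }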

What do you think about these ideas?

Thanks,
-- 
KaiGai Kohei <kaigai@kaigai.gr.jp>


