Re: dynamically allocating chunks from shared memory

From Robert Haas
Subject Re: dynamically allocating chunks from shared memory
Date
Msg-id AANLkTikaBL05C1RSe3Ggn7+fnOyrf=3Ba4LtEPj06iL_@mail.gmail.com
In reply to Re: dynamically allocating chunks from shared memory  (Alvaro Herrera <alvherre@commandprompt.com>)
Responses Re: dynamically allocating chunks from shared memory  (Markus Wanner <markus@bluegap.ch>)
List pgsql-hackers
On Tue, Jul 20, 2010 at 5:46 PM, Alvaro Herrera
<alvherre@commandprompt.com> wrote:
> Excerpts from Markus Wanner's message of mar jul 20 14:54:42 -0400 2010:
>
>> > With respect to imessages specifically, what is the motivation for
>> > using shared memory rather than something like an SLRU?  The new
>> > LISTEN implementation uses an SLRU and handles variable-size messages,
>> > so it seems like it might be well-suited to this task.
>>
>> Well, imessages predates the new LISTEN implementation by some moons.
>> They are intended to replace (unix-ish) pipes between processes. I fail
>> to see the immediate link between (S)LRU and inter-process message
>> passing. It might be more useful for multiple LISTENers, but I bet it
>> has slightly different semantics than imessages.
>
> I guess what Robert is saying is that you don't need shmem to pass
> messages around.  The new LISTEN implementation was just an example.
> imessages aren't supposed to use it directly.  Rather, the idea is to
> store the messages in a new SLRU area.  Thus you don't need to mess with
> dynamically allocating shmem at all.

Right.  I might be full of bull, but that's what I'm saying.  :-)

>> But to be honest, I don't know too much about the new LISTEN
>> implementation. Do you think a loss-less
>> (single)-process-to-(single)-process message passing system could be
>> built on top of it?
>
> I don't think you should build on top of LISTEN but of slru.c.  This is
> probably more similar to multixact (see multixact.c) than to the new
> LISTEN implementation.
>
> I think it should be rather straightforward.  There would be a unique
> append-point; each process desiring to send a new message to another
> backend would add a new message at that point.  There would be one read
> pointer per backend, and it would be advanced as messages are consumed.
> Old segments could be trimmed as backends advance their read pointer,
> similar to how sinval queue is handled.
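
For my own benefit, here's roughly the shared state I think that amounts
to.  The names and the backend-count constant are made up, and the
message bytes themselves would of course live in SLRU pages rather than
in this struct:

#include <stdint.h>

#define SKETCH_MAX_BACKENDS 64          /* stand-in, not the real constant */

typedef struct MessageLogShared
{
    uint64_t    appendPos;                      /* unique append point */
    uint64_t    readPos[SKETCH_MAX_BACKENDS];   /* one read pointer per backend */
} MessageLogShared;

/*
 * Segments behind the slowest reader could be truncated, much as the
 * sinval queue does.
 */
static uint64_t
oldest_read_pos(const MessageLogShared *log, int nbackends)
{
    uint64_t    oldest = log->appendPos;

    for (int i = 0; i < nbackends; i++)
        if (log->readPos[i] < oldest)
            oldest = log->readPos[i];
    return oldest;
}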

If the messages are mostly unicast, it might be nice to contrive a
method whereby backends didn't need to explicitly advance over
messages destined only for other backends.  Like maybe allocate a
small, fixed amount of shared memory sufficient for two "pointers"
into the SLRU area per backend, and then use the SLRU to store each
message with a header indicating where the next message is to be
found.  For each backend, you store one pointer to the first queued
message and one pointer to the last queued message.  New messages can
be added by making the current last message point to a newly added
message and updating the last message pointer for that backend.  You'd
need to think about the locking and reference counting carefully to
make sure you eventually freed up unused pages, but it seems like it
might be doable.  Of course, if the messages are mostly multi/anycast,
or if the rate of messaging is low enough that the aforementioned
complexity is not worth bothering with, then, what you said.
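
To make the unicast variant a bit more concrete, here's a standalone toy
version of what I have in mind.  The flat "slru" byte array, the struct
and function names, and the constants are all stand-ins I made up, not
real APIs, and it ignores locking, alignment, page boundaries, and the
reference counting mentioned above:

#include <stdint.h>
#include <string.h>

#define SKETCH_MAX_BACKENDS 64
#define INVALID_OFFSET      UINT64_MAX

/* Fixed-size shared memory: two "pointers" into the SLRU area per backend. */
typedef struct PerBackendQueue
{
    uint64_t    firstMsg;       /* offset of first queued message, or INVALID_OFFSET */
    uint64_t    lastMsg;        /* offset of last queued message, or INVALID_OFFSET */
} PerBackendQueue;

typedef struct IMessageShared
{
    uint64_t        appendPos;                      /* next free offset in the SLRU area */
    PerBackendQueue queues[SKETCH_MAX_BACKENDS];    /* one head/tail pair per backend */
} IMessageShared;

/* Each message stored in the SLRU area starts with this header. */
typedef struct IMessageHeader
{
    uint64_t    nextMsg;        /* recipient's next message, or INVALID_OFFSET */
    uint32_t    length;         /* length of the payload that follows */
} IMessageHeader;

/*
 * Append a message for one recipient.  Here "slru" is just a flat byte
 * array; the real thing would go through slru.c page access under a lock.
 */
static void
imessage_send(IMessageShared *shm, char *slru, int recipient,
              const void *payload, uint32_t len)
{
    uint64_t        pos = shm->appendPos;
    IMessageHeader  hdr = { INVALID_OFFSET, len };
    PerBackendQueue *q = &shm->queues[recipient];

    /* write header + payload at the shared append point */
    memcpy(slru + pos, &hdr, sizeof(hdr));
    memcpy(slru + pos + sizeof(hdr), payload, len);
    shm->appendPos = pos + sizeof(hdr) + len;

    /* link it from the recipient's current last message, if any */
    if (q->lastMsg != INVALID_OFFSET)
        ((IMessageHeader *) (slru + q->lastMsg))->nextMsg = pos;
    else
        q->firstMsg = pos;
    q->lastMsg = pos;
}

The point is just that the only fixed-size shared memory is the
per-backend head/tail pair; everything variable-sized goes through the
SLRU area, and a backend consuming messages only ever walks its own
chain.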

One big advantage of attacking the problem with an SLRU is that
there's no fixed upper limit on the amount of data that can be
enqueued at any given time.  You can spill to disk or whatever as
needed (although hopefully you won't normally do so, for performance
reasons).

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

