Re: NOTIFY with tuples

From Robert Haas
Subject Re: NOTIFY with tuples
Date
Msg-id CA+TgmoYL_US-K=NcWXkDgpHphNSqLNgyjW4gGWk98aDjwiz8Kg@mail.gmail.com
In response to NOTIFY with tuples  (Thomas Munro <munro@ip9.org>)
Responses Re: NOTIFY with tuples  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On Tue, Dec 13, 2011 at 6:30 PM, Thomas Munro <munro@ip9.org> wrote:
> It seems there are a number of academic and commercial
> systems (StreamSQL [1], CQL [2], ...)  which provide powerful
> queryable streams of tuples, including windowing, grouping,
> joining and pipelining facilities, all of which are far beyond
> what I have been picturing.
>
> I imagine a very simple system like this, somehow built on top of
> the existing NOTIFY infrastructure:
>
>  CREATE STREAM foo (sensor INTEGER, temperature NUMERIC);
>
> In session A:
>
>  INSERT INTO foo VALUES (42, 99.0);
>  INSERT INTO foo VALUES (99, 100.0);
>  COMMIT;
>
> Meanwhile in session B:
>
>  SELECT * FROM foo;
>
> And perhaps even some simple filtering:
>
>  SELECT * FROM foo WHERE sensor = 42;
>
> I don't know how you would first signal your interest in foo
> before you can start SELECTing from it... perhaps with LISTEN.
>
> I suppose running the SELECT query on a stream would return only
> tuples that are queued up and ready to fetch, without blocking to
> wait for more, and a client could execute the query repeatedly,
> using select() on the file descriptor to wait for data to be
> ready (the same way people currently wait between calls to
> PGnotifies).
>
> As for implementation details, I haven't done much research yet
> into how something like this would be done and am very new to the
> source tree, but I thought I'd present this idea and see if it's
> a duplicate effort, or someone has a much better idea, or it is
> instantly shot down in flames for technical or other reasons,
> before investing further in it.

I'm not sure whether we'd want something like this in core, so for a
first go-around, you might want to consider building it as an
extension.  It might work to just decree that each stream must be
built around a composite type.  Then you could do this:

pg_create_stream(regclass) - create a stream based on the given composite type
pg_destroy_stream(regclass) - nuke the stream
pg_subscribe_stream(regclass) - current backend wants to read from the stream
pg_unsubscribe_stream(regclass) - current backend no longer wants to
read from the stream
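
For those function signatures, a client session might use the stream
roughly like this (just a sketch of the API proposed above; none of
these functions exist yet):

  -- define the tuple shape for the stream
  CREATE TYPE foo AS (sensor integer, temperature numeric);

  -- create the stream and sign the current backend up as a reader
  SELECT pg_create_stream('foo'::regclass);
  SELECT pg_subscribe_stream('foo'::regclass);

  -- later, once the backend is done with it
  SELECT pg_unsubscribe_stream('foo'::regclass);
  SELECT pg_destroy_stream('foo'::regclass);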

The function pg_create_stream() could create reader and writer
functions for the stream.  For example, if the composite type were
called "foo", then you'd end up with foo_read() returning SETOF foo
and foo_write(foo) returning void.  The C functions would look at the
argument types to figure out which stream they were operating on.  The
writer function would store all the tuples written to the stream into a temp
file with a name based on the composite type OID.  The reader function
would return all tuples added to the temp file since the last read.
You'd want the last read locations for all the subscribers stored in
the file (or another file) somewhere so that when the furthest-back
reader read the data, it could figure out which data was no longer
needed and arrange for it to be truncated away.  I'm not sure you
need NOTIFY for anything anywhere in here.
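
To make that concrete, usage of the generated functions might look
something like this (again only a sketch; foo_read() and foo_write()
are the hypothetical generated functions described above):

  -- a writing session appends tuples to the stream's temp file
  SELECT foo_write(ROW(42, 99.0)::foo);
  SELECT foo_write(ROW(99, 100.0)::foo);

  -- each subscribed reader gets whatever has been written since its
  -- own last read, with ordinary WHERE-clause filtering on top
  SELECT * FROM foo_read();
  SELECT * FROM foo_read() WHERE sensor = 42;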

All in all, this is probably a pretty complicated project, but I'm
sure there are people who would use it.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

