PgQ and pg_dump

From Martín Marqués
Subject PgQ and pg_dump
Date
Msg-id d86dd685-1870-cfa0-e5e4-def1f918bec9@2ndquadrant.com
Replies Re: PgQ and pg_dump  (Michael Paquier <michael.paquier@gmail.com>)
List pgsql-general
Hi,

I was working on a PgQ installation and found something odd regarding
running pg_dump on a database whose pgq schema was created by the
extension. I'd like to know whether others here have bumped into it.

If PgQ is installed as an extension (by executing CREATE EXTENSION pgq),
all the objects created by the extension will depend on it, and so will
have entries in pg_depend with deptype 'e'. (These are the objects that
pg_dump ignores, since they will be recreated by the extension.)
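To make that concrete, here is a sketch of a catalog query that lists the objects pg_dump will skip for a given extension. The catalogs and the pg_describe_object() function are standard PostgreSQL; the extension name 'pgq' is just the case at hand.

```sql
-- Objects that are members of the pgq extension (deptype 'e'),
-- i.e. the objects pg_dump will not emit because CREATE EXTENSION
-- is expected to recreate them on restore.
SELECT pg_describe_object(d.classid, d.objid, d.objsubid) AS object
FROM pg_depend d
WHERE d.deptype = 'e'
  AND d.refclassid = 'pg_extension'::regclass
  AND d.refobjid = (SELECT oid FROM pg_extension WHERE extname = 'pgq');
```

Running this against an affected database should show the pgq schema itself among the members, which is what keeps it (and its contents) out of the dump.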

The problem is that pgq.sql creates the pgq schema, so that object
won't get dumped, and neither will the other objects created in that
schema, including the event tables created by pgq.create_queue().

I wonder whether this is the desired way of handling PgQ, or whether
those tables should be dumped. I'm starting to think this is a PgQ bug,
or maybe that it's not a good idea to install PgQ as an extension.
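For what it's worth, PostgreSQL does provide a mechanism for an extension to opt specific tables' data back into pg_dump: pg_extension_config_dump(). A sketch, with a hypothetical table name (whether PgQ could or should use this for its queue bookkeeping tables is exactly the open question):

```sql
-- Inside an extension's install script: create a member table and
-- mark its data as "configuration" so pg_dump includes the rows.
-- The second argument is a WHERE filter; '' means dump all rows.
CREATE TABLE queue_config (key text PRIMARY KEY, value text);
SELECT pg_catalog.pg_extension_config_dump('queue_config', '');
```

Note this only applies to tables that are members of the extension; tables created later at runtime (as pgq.create_queue() does) would need separate handling.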

This happens because PgQ was installed as an extension, as opposed to
just passing the pgqd.sql file to psql, or having the schemata, tables
and functions created by londiste (maybe the most common way nowadays).

Is it sensible to have all the pgq* schemata recreated (and empty) when
restoring a dump, or not?

Regards,

--
Martín Marqués                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

