Re: Logical replication and multimaster

From: Craig Ringer
Subject: Re: Logical replication and multimaster
Date:
Msg-id: CAMsr+YF7vP72LzMmNhe0ppZg-KnEt8d_RXzyjZaZMNykmQEiVQ@mail.gmail.com
In reply to: Re: Logical replication and multimaster (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: Logical replication and multimaster (David Fetter <david@fetter.org>)
List: pgsql-hackers
On 12 December 2015 at 02:24, Robert Haas <robertmhaas@gmail.com> wrote:
> On Fri, Dec 11, 2015 at 5:16 AM, Andres Freund <andres@anarazel.de> wrote:
>> On 2015-12-11 18:12:55 +0800, Craig Ringer wrote:
>>> On 10 December 2015 at 03:19, Robert Haas <robertmhaas@gmail.com> wrote:
>>>> On Sun, Dec 6, 2015 at 10:24 PM, Craig Ringer <craig@2ndquadrant.com>
>>>> wrote:
>>>>> * A way to securely make a libpq connection from a bgworker without
>>>>> messing with passwords etc. Generate one-time cookies, something like that.
>>>>
>>>> Why would you have the bgworker connect to the database via TCP
>>>> instead of just doing whatever it wants to do directly?
>>>
>>> pg_dump and pg_restore, mainly, for copying the initial database state.
>>
>> Well, you don't necessarily want to connect directly from the bgworker,
>> but from processes started from a bgworker. I guess that's where a good
>> bit of Robert's confusion originated.
>
> That's part of it, yeah.  I'm a little scared of this design.  I mean,
> I understand now why Craig wants to do this (thanks for explaining,
> Craig!), but it seems like it's going to have a lot of the same
> reliability problems that pg_upgrade does.

Yes, and more.

Especially when dealing with multiple upstream servers, etc.

It's not very nice. I would very much prefer to have a better way to achieve the initial data sync, but at present I don't think there is any better approach that's even remotely practical.

> I'm not saying there's a better way to get the functionality

Yup. That's the problem.
 
> but it's pretty obvious that depending on tools other than the server itself,
> and in particular pg_dump, vastly increases the failure surface area.

It's not too bad to find pg_dump, though we ended up not being able to re-use find_other_exec for various reasons I'll have to try to dig out of the cruftier corners of my memory. It has a fairly sane interface, too.

Things get hairy when you want to do things like "give me all the upstream's non-table objects, then give me [this set of table definitions]"... then you go and sync the data from an exported snapshot using COPY, then finish up by restoring the constraints for the set of tables you dumped.
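To make that concrete, here's a rough sketch of the sequence, with hypothetical object and database names, assuming the exported-snapshot machinery (pg_export_snapshot() / SET TRANSACTION SNAPSHOT) and pg_dump's --snapshot option; the pg_dump steps are shown as comments because they run from the shell, not SQL:

    -- Session A on the upstream: export a snapshot and keep this
    -- transaction open so the snapshot stays valid throughout.
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT pg_export_snapshot();   -- returns e.g. '00000003-0000001B-1'

    -- Schema-only dump pinned to that snapshot (shell, not SQL):
    --   pg_dump --snapshot='00000003-0000001B-1' --section=pre-data upstream_db

    -- Session B: copy table data consistent with the same snapshot.
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SET TRANSACTION SNAPSHOT '00000003-0000001B-1';
    COPY some_table TO STDOUT;     -- feed into COPY ... FROM on the downstream
    COMMIT;

    -- Finally, restore constraints and indexes on the downstream:
    --   pg_dump --snapshot='00000003-0000001B-1' --section=post-data upstream_db \
    --       | psql downstream_db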

Being able to access pg_dump and pg_restore's dependency resolution logic, object dumping routines, etc from regular SQL and from the SPI would be wonderful.
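Purely as an illustration of the kind of interface I mean (nothing like this exists today, and all the names are made up):

    -- Hypothetical: expose pg_dump's dependency-sorted object list and
    -- creation SQL at the SQL level, usable from SPI as well.
    SELECT objid, objtype, create_sql
    FROM pg_dump_objects(include_tables => false)  -- non-table objects only
    ORDER BY dump_order;                           -- dependency-resolved order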

I believe the main complaint about doing that, when it was discussed in the past, was downgrading: a pg_get_tabledef(...) in 9.6 might emit keywords etc. that a 9.2 server wouldn't understand, and the way we currently solve this is to require that you run 9.2's pg_dump against the 9.6 server to get a 9.2-compatible dump. So if we had pg_get_blahdef functions we'd still need external versions of the same logic, so why bother having them?
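As a concrete (made-up) example of the problem, a newer server's output can use types or syntax the older one simply doesn't have:

    -- What a 9.6 pg_get_tabledef() might plausibly emit:
    CREATE TABLE queue_item (
        id      bigserial PRIMARY KEY,
        payload jsonb   -- jsonb only exists in 9.4 and later; a 9.2
    );                  -- server would reject this outright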

The alternative is to have all the get_blahdef functions accept a param for server version compatibility, which would work but burden future servers with knowledge about older versions' features and corresponding code cruft for some extended period of time.
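Something along these lines, with an entirely hypothetical signature:

    -- Hypothetical version-aware variant: the caller names the oldest
    -- server version the generated SQL must load into.
    SELECT pg_get_tabledef('public.queue_item'::regclass,
                           target_version => 90200);  -- 9.2-compatible output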

So it's gone nowhere to date.

For that matter it's not clear that pg_get_blahdef functions would be the right solution, but I can't see directly poking around in the catalogs and basically re-implementing pg_dump being OK either. So what else could we do?


--
 Craig Ringer                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
