Re: Catalog/Metadata consistency during changeset extraction from wal

From	Robert Haas
Subject	Re: Catalog/Metadata consistency during changeset extraction from wal
Date
Msg-id	CA+TgmoYTNLG5tmtyEcZfMnzrMYKBS=iF+YKE2+K_fEmKd=+ekg@mail.gmail.com
In response to	Re: Catalog/Metadata consistency during changeset extraction from wal  (Andres Freund <andres@2ndquadrant.com>)
Responses	Re: Catalog/Metadata consistency during changeset extraction from wal  (Amit Kapila <amit.kapila@huawei.com>)
Re: Catalog/Metadata consistency during changeset extraction from wal  (Andres Freund <andres@2ndquadrant.com>)
List	pgsql-hackers
On Sun, Jun 24, 2012 at 5:11 PM, Andres Freund <andres@2ndquadrant.com> wrote:
> There are some interesting problems related to locking and snapshots here. Not
> sure if they are resolvable:
>
> We need to restrict SnapshotNow to represent the view it had back when the
> wal record we're currently decoding was created. Otherwise we would possibly
> get wrong column types and similar. As we're working in the past, locking
> doesn't protect us against much here. I have that (mostly and inefficiently).
>
> One interesting problem is table rewrites (truncate, cluster, some ALTER
> TABLE's) and dropping tables. Because we nudge SnapshotNow to the past view
> it had back when the wal record was created, we get the old relfilenode,
> which might have been dropped as part of the transaction cleanup...
> With most types that's not a problem. Even things like records and arrays
> aren't problematic. More interesting cases include VACUUM FULL $systable
> (e.g. pg_enum) and VACUUM FULL'ing a table which is used in the *_out
> function of a type (like a user-level pg_enum implementation).
>
> The only theoretical way I see around that problem would be to postpone all
> relation unlinks until everything that could possibly read them has finished.
> That doesn't seem too alluring, although it would be needed if we ever move
> more things off SnapshotNow.
>
> Input/Ideas/Opinions?

Yeah, this is slightly nasty.  I'm not sure whether or not there's a
way to make it work.

I had another idea.  Suppose decoding happens directly on the primary,
because I'm still hoping there's a way to swing that.  Suppose further
that we handle DDL by insisting that (1) any backend which wants to
add columns or change the types of existing columns must first wait
for logical replication to catch up and (2) if a backend which has
added columns or changed the types of existing columns then writes to
the modified table, decoding of those writes will be postponed until
transaction commit.  I think that's enough to guarantee that the
decoding process can just use the catalogs as they stand, with plain
old SnapshotNow.
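To make the two rules concrete, here is a minimal, purely illustrative model of the coordination being proposed (this is not PostgreSQL code; the class and method names are invented for the sketch). Rule 1: a transaction that alters columns first forces decoding to catch up to the current WAL position. Rule 2: subsequent writes by that transaction to the altered table are not decoded until commit, when the new catalog rows are visible under plain SnapshotNow.

```python
# Hypothetical sketch of the proposed DDL/decoding coordination.
# Names (DecodingCoordinator, Transaction, etc.) are illustrative only.

class DecodingCoordinator:
    """Tracks WAL generation and how far logical decoding has caught up."""

    def __init__(self):
        self.current_lsn = 0   # last WAL position written
        self.decoded_upto = 0  # WAL position decoding has reached

    def write_wal(self):
        self.current_lsn += 1
        return self.current_lsn

    def catch_up(self):
        # Rule 1: column-altering DDL waits here until logical
        # replication has decoded everything written so far.
        self.decoded_upto = self.current_lsn


class Transaction:
    """Models one backend's transaction as seen by the decoder."""

    def __init__(self, coord):
        self.coord = coord
        self.altered_tables = set()
        self.decoded_now = []  # writes decodable immediately
        self.deferred = []     # writes postponed until commit (Rule 2)

    def alter_column(self, table):
        self.coord.catch_up()            # Rule 1: wait for decoding
        self.altered_tables.add(table)
        self.coord.write_wal()           # the DDL itself is WAL-logged

    def write(self, table, row):
        lsn = self.coord.write_wal()
        if table in self.altered_tables:
            # Rule 2: this backend changed the table's columns, so
            # decoding of its writes waits for commit.
            self.deferred.append((lsn, table, row))
        else:
            self.decoded_now.append((lsn, table, row))

    def commit(self):
        # At commit the new catalog contents are the current contents,
        # so deferred writes can be decoded with plain SnapshotNow.
        decoded = self.decoded_now + self.deferred
        self.decoded_now, self.deferred = [], []
        return decoded
```

The point of the model is the invariant it enforces: the decoder never has to look at catalog rows newer than the data it is decoding, because anything written under not-yet-committed catalog changes is held back until those changes commit.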

The downside of this approach is that it makes certain kinds of DDL
suck worse if logical replication is in use and behind.  But I don't
necessarily see that as prohibitive because (1) logical replication
being behind is likely to suck for a lot of other reasons too and (2)
adding or retyping columns isn't a terribly frequent operation and
people already expect a hit when they do it.  Also, I suspect that we
could find ways to loosen those restrictions at least in common cases
in some future version; meanwhile, less work now.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


In the pgsql-hackers list, by message date:

Previous
From: Simon Riggs
Date:
Message: Re: Catalog/Metadata consistency during changeset extraction from wal
Next
From: Robert Haas
Date:
Message: Re: libpq compression