Re: pg15b2: large objects lost on upgrade

From: Robert Haas
Subject: Re: pg15b2: large objects lost on upgrade
Date:
Msg-id: CA+TgmoYmDLSf5RbsXDFcry=TROVvCv5jKbPa_PQ0mWq8LqSyaA@mail.gmail.com
In reply to: Re: pg15b2: large objects lost on upgrade  (Justin Pryzby <pryzby@telsasoft.com>)
List: pgsql-hackers
On Fri, Jul 8, 2022 at 11:53 AM Justin Pryzby <pryzby@telsasoft.com> wrote:
> pg_upgrade drops template1 and postgres before upgrading:

Hmm, but I bet you could fiddle with template0. Indeed, what's the
difference between a user fiddling with template0 and me committing a
patch that bumps catversion? If the latter doesn't prevent pg_upgrade
from working when the release comes out, why should the former?

> I don't have a deep understanding why the DB hasn't imploded at this point,
> maybe related to the filenode map file, but it seems very close to being
> catastrophic.

Yeah, good example.

> It seems like pg_upgrade should at least check that the new cluster has no
> objects with either OID or relfilenodes in the user range..

Well... I think it's not quite that simple. There's an argument for
that rule, to be sure, but in some sense it's far too strict. We only
preserve the OIDs of tablespaces, types, enums, roles, and now
relations. So if you create any other type of object in the new
cluster, like say a function, it's totally fine. You could still fail
if the old cluster happens to contain a function with the same
signature, but that's kind of a different issue. An OID collision for
any of the many object types for which OIDs are not preserved is no
problem at all.
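
To make that concrete, the strict rule you're proposing would amount
to running something like this in each database of the new cluster
and complaining if it returns any rows. 16384 is FirstNormalObjectId,
the start of the user OID range; the query is only an illustration of
the rule, not something pg_upgrade literally executes:

    SELECT oid, relname, relfilenode
    FROM pg_class
    WHERE oid >= 16384 OR relfilenode >= 16384;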

But even if we restrict ourselves to talking about an object type for
which OIDs are preserved, it's still not quite that simple. For
example, if I create a relation in the new cluster, its OID or
relfilenode might be the same as a relation that exists in the old
cluster. In such a case, a failure is inevitable. We're definitely in
big trouble, and the question is only whether pg_upgrade will notice.
But it's also possible that, either by planning or by sheer dumb
luck, neither the OID nor the relfilenode of the relation that I've
created in the new cluster is in use in the old cluster. In such a
case, the upgrade can succeed
without breaking anything, or at least nothing other than our sense of
order in the universe.

Without a doubt, there are holes in pg_upgrade's error checking that
need to be plugged, but I think there is room to debate exactly what
size plug we want to use. I can't really say that it's stupid to use
a plug that's definitely big enough to catch all the scenarios that
might break stuff, but my own preference would be to craft it so that
it isn't too much larger than necessary.
That's partly because I do think there are some scenarios in which
modifying the new cluster might be the easiest way of working around
some problem, but also because, as a matter of principle, I like the
idea of making rules that correspond to the real dangers. If we write
a rule that says essentially "it's no good if there are two relations
sharing a relfilenode," nobody with any understanding of how the
system works can argue that bypassing it is a sensible thing to do,
and probably nobody will even try, because it's so obviously bonkers
to do so. It's a lot less obviously bonkers to try to bypass the
broader prohibition which you suggest should never be bypassed, so
someone may do it, and get themselves in trouble.
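
To illustrate the narrower sort of rule I mean: suppose the old
cluster's pg_class OIDs and relfilenodes were pulled into a staging
table old_rels in the new cluster (hypothetical; pg_upgrade would
really gather this in C over its own connections). Flagging only the
genuinely dangerous cases is then roughly:

    -- old_rels(oid, relfilenode) assumed loaded from the old cluster
    SELECT n.oid, n.relname
    FROM pg_class n
    JOIN old_rels o
      ON n.oid = o.oid OR n.relfilenode = o.relfilenode
    -- restrict to the user range so the system catalogs, which
    -- trivially share OIDs across any two clusters, don't match
    WHERE n.oid >= 16384 OR n.relfilenode >= 16384;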

Now I'm not saying such a person will get any sympathy from this list.
If, for example, you #if out the sanity check and hose yourself,
people here, including me, are going to suggest that you've hosed
yourself and it's not our problem. But ... the world is full of
warnings about problems that aren't really that serious, and sometimes
those have the effect of discouraging people from taking warnings
about very serious problems as seriously as they should. I know that I
no longer panic when the National Weather Service texts me to say that
there's a tornado or a flash flood in my area. They've just done that
too many times when there was no real issue with which I needed to be
concerned. If I get caught out by a tornado at some point, they're
probably going to say "well that's why you should always take our
warnings seriously," but I'm going to say "well that's why you
shouldn't send spurious warnings."

> > Perhaps a better solution to this particular problem is to remove the
> > backing files for the large object table and index *before* restoring
> > the dump, deciding what files to remove by asking the running server
> > for the file path. It might seem funny to allow for dangling pg_class
> > entries, but we're going to create that situation for all other user
> > rels anyway, and pg_upgrade treats pg_largeobject as a user rel.
>
> I'll think about it more later.

Sounds good.
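
For what it's worth, the paths in question can be read off the
running server with pg_relation_filepath(), e.g.:

    SELECT pg_relation_filepath('pg_largeobject');
    SELECT pg_relation_filepath('pg_largeobject_loid_pn_index');

That's just to show where the backing files live; an actual fix would
do the equivalent from pg_upgrade's C code.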

-- 
Robert Haas
EDB: http://www.enterprisedb.com


