Re: [RFC] Removing "magic" oids

From: Andres Freund
Subject: Re: [RFC] Removing "magic" oids
Date:
Msg-id: 20181122001415.ef5bncxqin2y3esb@alap3.anarazel.de
In response to: Re: [RFC] Removing "magic" oids  (Andres Freund <andres@anarazel.de>)
Responses: Re: [RFC] Removing "magic" oids  (Andrew Dunstan <andrew.dunstan@2ndquadrant.com>)
List: pgsql-hackers
Hi,

On 2018-11-21 10:36:42 -0800, Andres Freund wrote:
> >pg_upgrade against old versions, by the look of it. Even after I drop 
> >oids from user tables, I get errors like this when running pg_dumpall 
> >against a pg_upgraded REL_9_4_STABLE datadir:

Not pg_upgrade in general; I did test that. "Just" large objects.
pg_largeobject and pg_largeobject_metadata are the only catalog tables
that we copy over from the old cluster. I'd checked that pg_largeobject
doesn't contain an oid column, but somehow missed that pg_upgrade also
copies pg_largeobject_metadata, which does contain an oid.
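
(For reference, against a pre-v12 catalog, where relhasoids still
exists, something like the following makes the difference visible;
untested sketch:

    SELECT relname, relhasoids
    FROM pg_class
    WHERE relname IN ('pg_largeobject', 'pg_largeobject_metadata');

pg_largeobject should come back with relhasoids = false,
pg_largeobject_metadata with relhasoids = true.)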

Part of that is me just missing that fact, but part of it is also that
a binary-upgrade pg_dump makes it nearly unnecessary to migrate
pg_largeobject_metadata in the first place: pg_dump in binary upgrade
mode *does* issue lo_create() calls, and also emits ALTER LARGE OBJECT
.. OWNER TO .. statements, which make copying pg_largeobject_metadata
over largely redundant. Only grants were skipped, but that's trivial
to change.
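
To sketch what that already looks like per blob in a binary-upgrade
dump (the OID and role names here are made up):

    SELECT pg_catalog.lo_create('16397');
    ALTER LARGE OBJECT 16397 OWNER TO andrew;

and, with grants no longer skipped, additionally something like:

    GRANT SELECT ON LARGE OBJECT 16397 TO some_user;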

The attached fix fixes this issue "simply" by enabling the dumping of
blob ACLs in binary upgrade mode, and by not copying the underlying
files over anymore. As we already emit multiple statements for each
large object, doing so additionally for grants ought not to matter too
much. The right way to optimize that, if we wanted to, would be to stop
dumping large object metadata via pg_dump in pg_upgrade altogether
(and make it a single COPY (SELECT) command that's restored via COPY).
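
I.e. roughly something like this on the dump side (sketch only;
assumes oid is a regular column after this patch series):

    COPY (SELECT oid, lomowner, lomacl
          FROM pg_catalog.pg_largeobject_metadata) TO STDOUT;

restored on the new cluster with a plain

    COPY pg_catalog.pg_largeobject_metadata FROM STDIN;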

There's one comment atop pg_upgrade that I'm not sure about anymore:
 *    We control all assignments of pg_authid.oid because these oids are stored
 *    in pg_largeobject_metadata. XXX still

That's not true anymore after this change. Turns out we currently don't
prevent regrole from being used as a column type (I'll start a thread
about insufficient reg* verification), but we could just prohibit that.
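
To spell out why that matters here (table and role names made up): a
column like

    CREATE TABLE uses_roles (r regrole);
    INSERT INTO uses_roles VALUES ('some_role'::regrole);

stores the pg_authid oid of some_role in user data, so pg_upgrade
would still need to control pg_authid.oid assignments for such tables
to survive the upgrade; prohibiting that is the simpler way out.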
Any comments?


Note that this doesn't fix the < 9.0 path where we recreate
pg_largeobject_metadata. I basically think we should just remove
support for versions that old. But even if we cannot agree on that, I
first want to know whether this fixes your full set of problems.


> >2018-11-21 13:01:58.582 EST [11861] ERROR:  large object 10 does not exist
> >2018-11-21 13:01:58.637 EST [11686] ERROR:  could not open file "base/17486/2840_vm": No such file or directory
> >2018-11-21 13:01:58.637 EST [11686] CONTEXT:  writing block 0 of relation base/17486/2840_vm
> >2018-11-21 13:01:59.638 EST [11686] ERROR:  could not open file "base/17486/2840_vm": No such file or directory
> >2018-11-21 13:01:59.638 EST [11686] CONTEXT:  writing block 0 of relation base/17486/2840_vm
> >2018-11-21 13:01:59.638 EST [11686] WARNING:  could not write block 0 of base/17486/2840_vm

I was not able to reproduce this issue.  Could you check whether you
still encounter it after applying the attached fix? And if so, could
you provide me with more detailed instructions on how to reproduce?

Greetings,

Andres Freund
