Re: Horribly slow pg_upgrade performance with many Large Objects
From:        Tom Lane
Subject:     Re: Horribly slow pg_upgrade performance with many Large Objects
Msg-id:      4044567.1744128814@sss.pgh.pa.us
In reply to: Re: Horribly slow pg_upgrade performance with many Large Objects  (Nathan Bossart <nathandbossart@gmail.com>)
Responses:   Re: Horribly slow pg_upgrade performance with many Large Objects
List:        pgsql-hackers
Nathan Bossart <nathandbossart@gmail.com> writes:
> On Tue, Apr 08, 2025 at 09:35:24AM +0200, Hannu Krosing wrote:
>> Changing the LO export to dumping pg_largeobject_metadata content
>> instead of creating the LOs should be a nice small change confined to
>> pg_dump --binary-upgrade only so perhaps we could squeeze it in v18
>> still.

> Feature freeze for v18 was ~4 hours ago, so unfortunately this is v19
> material at this point.

Yeah, even if we had a patch in hand, it's too late for v18.
However, there are additional problems with this idea:

1. It requires role OIDs to match across the upgrade.  I don't believe
that pg_upgrade tries to preserve role OIDs --- and doing so would be
problematic, because what if the new cluster's bootstrap superuser is
named differently in the old and new clusters?  It might be possible to
work around that with some casting to/from regrole, but I don't think a
simple COPY into pg_largeobject_metadata will play along with that.

2. If you just do the equivalent of an INSERT or COPY into
pg_largeobject_metadata, you can create entries that look right but are
not actually right: each ownership or permission reference (for
non-pinned roles) should be backed by pg_shdepend entries, and there
won't be any.  I guess you could think of also manually inserting rows
into pg_shdepend, but (a) ugh, and (b) the claimed speedup is kind of
vanishing into the distance at this point.

			regards, tom lane
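To make point 1 concrete, here is a minimal sketch of the regrole
workaround mentioned above, assuming a hypothetical staging table
(lo_metadata_staging) and a server-side dump file path; this is not a
committed patch, and it still runs into point 2:

    -- On the old cluster: export the owner as a role name, not an OID.
    COPY (SELECT oid, lomowner::regrole::text AS owner, lomacl
          FROM pg_largeobject_metadata)
    TO '/tmp/lo_metadata.dat';

    -- On the new cluster: re-resolve the name to this cluster's role OID.
    -- (Writing to a system catalog requires allow_system_table_mods,
    -- and no pg_shdepend rows are created; see point 2.)
    INSERT INTO pg_largeobject_metadata (oid, lomowner, lomacl)
    SELECT oid, owner::regrole::oid, lomacl
    FROM lo_metadata_staging;  -- hypothetical table loaded from the file

Note that lomacl would survive the text round-trip on its own, since
aclitem's text form carries role names that are re-resolved to OIDs on
input.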
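And to illustrate point 2, a large object owned by a non-pinned role
normally has a matching owner dependency in pg_shdepend, created when
the object is made the normal way.  A read-only query along these
lines shows the rows that a raw catalog insert would fail to create:

    -- deptype 'o' marks the owner reference, 'a' an ACL reference.
    SELECT classid::regclass AS object_class, objid AS lo_oid,
           refobjid::regrole AS role, deptype
    FROM pg_shdepend
    WHERE classid = 'pg_largeobject'::regclass;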