Re: Horribly slow pg_upgrade performance with many Large Objects
From | Hannu Krosing
---|---
Subject | Re: Horribly slow pg_upgrade performance with many Large Objects
Msg-id | CAMT0RQStPtHfKwowd88Q0tynX0x=uJSKn=ihP8syhDJ6cH3DHQ@mail.gmail.com
In reply to | Re: Horribly slow pg_upgrade performance with many Large Objects (Nathan Bossart <nathandbossart@gmail.com>)
Responses | Re: Horribly slow pg_upgrade performance with many Large Objects
List | pgsql-hackers
On Tue, Jul 8, 2025 at 11:06 PM Nathan Bossart <nathandbossart@gmail.com> wrote:
>
> On Sun, Jul 06, 2025 at 02:48:08PM +0200, Hannu Krosing wrote:
> > Did a quick check of the patch and it seems to work ok.
>
> Thanks for taking a look.
>
> > What do you think of the idea of not dumping pg_shdepend here, but
> > instead adding the required entries after loading
> > pg_largeobject_metadata based on the contents of it ?
>
> While not dumping it might save a little space during upgrade, the query
> seems to be extremely slow. So, I don't see any strong advantage.

Yeah, it looks like the part that avoids duplicates is what made it slow. If you run it without the last WHERE clause it is reasonably fast. And it behaves the same as just inserting from the dump, which also does not have any checks against duplicates.
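For illustration, the idea being discussed — regenerating the owner-dependency rows in pg_shdepend from the freshly loaded pg_largeobject_metadata, instead of dumping pg_shdepend itself — might look roughly like the sketch below. This is a hypothetical reconstruction, not the patch's actual query: the exact column values, the bootstrap-superuser OID check, and the assumption that this runs in binary-upgrade mode (where system catalogs are writable) are all mine. The final NOT EXISTS clause is the kind of duplicate check described above as the slow part.

```sql
-- Hypothetical sketch: add SHARED_DEPENDENCY_OWNER ('o') rows for large
-- objects after pg_largeobject_metadata has been loaded.
INSERT INTO pg_shdepend
       (dbid, classid, objid, objsubid, refclassid, refobjid, deptype)
SELECT (SELECT oid FROM pg_database WHERE datname = current_database()),
       'pg_largeobject'::regclass, l.oid, 0,
       'pg_authid'::regclass, l.lomowner, 'o'
FROM pg_largeobject_metadata l
WHERE l.lomowner <> 10          -- assumed: skip bootstrap-superuser-owned LOs
  AND NOT EXISTS (              -- duplicate check; dropping this clause is
      SELECT 1 FROM pg_shdepend s   -- what makes the query fast again
      WHERE s.classid = 'pg_largeobject'::regclass
        AND s.objid = l.oid);
```

Plain insertion from a dump has no such anti-join against the existing pg_shdepend contents, which is why removing the last WHERE makes the two approaches behave the same.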