Re: pg_upgrade failing for 200+ million Large Objects

From: Alexander Korotkov
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date:
Msg-id: CAPpHfduUSA7c0YrTxN443SwXXCL41vmXkavYJjN8GS3Wc5Z42w@mail.gmail.com
In response to: Re: pg_upgrade failing for 200+ million Large Objects  (Justin Pryzby <pryzby@telsasoft.com>)
Responses: Re: pg_upgrade failing for 200+ million Large Objects
Re: pg_upgrade failing for 200+ million Large Objects
List: pgsql-hackers
Hi, Justin!

Thank you for sharing this.

On Wed, Jul 24, 2024 at 5:18 PM Justin Pryzby <pryzby@telsasoft.com> wrote:
> On Mon, Apr 01, 2024 at 03:28:26PM -0400, Tom Lane wrote:
> > Nathan Bossart <nathandbossart@gmail.com> writes:
> > > The one design point that worries me a little is the non-configurability of
> > > --transaction-size in pg_upgrade.  I think it's fine to default it to 1,000
> > > or something, but given how often I've had to fiddle with
> > > max_locks_per_transaction, I'm wondering if we might regret hard-coding it.
> >
> > Well, we could add a command-line switch to pg_upgrade, but I'm
> > unconvinced that it'd be worth the trouble.  I think a very large
> > fraction of users invoke pg_upgrade by means of packager-supplied
> > scripts that are unlikely to provide a way to pass through such
> > a switch.  I'm inclined to say let's leave it as-is until we get
> > some actual field requests for a switch.
>
> I've been importing our schemas and doing upgrade testing, and was
> surprised when a postgres backend was killed for OOM during pg_upgrade:
>
> Killed process 989302 (postgres) total-vm:5495648kB, anon-rss:5153292kB, ...
>
> Upgrading from v16 => v16 doesn't use nearly as much RAM.
>
> While tracking down the responsible commit, I reproduced the problem
> using a subset of tables; at 959b38d770, the backend process used
> ~650 MB RAM, and at its parent commit used at most ~120 MB.
>
> 959b38d770b Invent --transaction-size option for pg_restore.
>
> By changing RESTORE_TRANSACTION_SIZE to 100, backend RAM use goes to
> 180 MB during pg_upgrade, which is reasonable.
>
> With partitioning, we have a lot of tables, some of them wide (126
> partitioned tables, 8942 children, total 1019315 columns).  I didn't track
> if certain parts of our schema contribute most to the high backend mem
> use, just that it's now 5x (while testing a subset) to 50x higher.

Do you think there is a way to anonymize the schema and share it?

> We'd surely prefer that the transaction size be configurable.

I think we can add an option to pg_upgrade.  But I wonder if there is
something else we can do.  It seems that restoring some objects is
much more expensive than restoring others.  It would be nice to
identify such cases and check which memory contexts are growing and
why.  It would be helpful if you could share your data schema, so we
could dig into it.
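
In the meantime, here is one way to see where the memory goes (just a
sketch, assuming the new cluster is v14 or later so that
pg_log_backend_memory_contexts() and the pg_backend_memory_contexts view
are available).  While the schema restore is running, one could look at
the backend that pg_restore is connected to:

    -- from another session, dump that backend's contexts to the server log
    SELECT pg_log_backend_memory_contexts(<pid of the restore backend>);

    -- or, from within a session, list its own largest contexts
    SELECT name, total_bytes, used_bytes
      FROM pg_backend_memory_contexts
     ORDER BY total_bytes DESC
     LIMIT 20;

That should at least show which contexts grow with the transaction size
and which ones do not.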

I can imagine we need to count some DDL commands differently than
others toward the maximum restore transaction size.  Also, we probably
need to change the default restore transaction size.
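
To illustrate what I mean by counting commands differently, here is a
purely hypothetical sketch (the struct and the ncolumns field below are
invented for illustration; they are not the real pg_restore TocEntry):
instead of charging every TOC entry one unit against --transaction-size,
pg_restore could charge more for entries that are known to be expensive
on the server side, such as very wide tables.

    #include <string.h>

    /* Simplified stand-in for a TOC entry; only what this sketch needs. */
    typedef struct
    {
        const char *desc;       /* e.g. "TABLE", "TABLE DATA" */
        int         ncolumns;   /* invented field: "how wide is this object" */
    } TocEntrySketch;

    /* Weight of one TOC entry toward the per-transaction budget. */
    static int
    toc_entry_cost(const TocEntrySketch *te)
    {
        int         cost = 1;

        /* Charge wide tables extra, on the assumption that they are
         * disproportionately expensive to hold in a single transaction. */
        if (strcmp(te->desc, "TABLE") == 0)
            cost += te->ncolumns / 100;

        return cost;
    }

The restore loop would then subtract toc_entry_cost() from the budget
instead of decrementing it by one, so a schema dominated by a few very
wide partitioned tables could not blow up a single transaction even with
a fairly large default.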

------
Regards,
Alexander Korotkov
Supabase


