Re: Query executed during pg_dump leads to excessive memory usage

From: Tom Lane
Subject: Re: Query executed during pg_dump leads to excessive memory usage
Date:
Msg-id: 1566986.1632079266@sss.pgh.pa.us
In reply to: Query executed during pg_dump leads to excessive memory usage  (Ulf Lohbrügge <ulf.lohbruegge@gmail.com>)
List: pgsql-performance
Ulf Lohbrügge <ulf.lohbruegge@gmail.com> writes:
> A database cluster (PostgreSQL 12.4 running on Amazon Aurora @
> db.r5.xlarge) with a single database of mine consists of 1,656,618 rows in
> pg_class.

Ouch.

> Using pg_dump on that database leads to excessive memory usage
> and sometimes even a kill by signal 9:

> 2021-09-18 16:51:24 UTC::@:[29787]:LOG:  Aurora Runtime process (PID 29794)
> was terminated by signal 9: Killed

For the record, Aurora isn't Postgres.  It's a heavily-modified fork,
with (I imagine) different performance bottlenecks.  Likely you
should be asking Amazon support about this before the PG community.

Having said that ...

> The high number of rows in pg_class result from more than ~550 schemata,
> each containing more than 600 tables. It's part of a multi tenant setup
> where each tenant lives in its own schema.

... you might have some luck dumping each schema separately, or at least
in small groups, using pg_dump's --schema switch.
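[A per-schema dump loop along these lines is one way to apply that suggestion — a sketch only; the `tenant_` prefix, database name `mydb`, and reliance on environment-based credentials are assumptions, not details from the thread:]

```shell
#!/bin/sh
# Dump each tenant schema into its own custom-format archive.
# Assumes schemas share a hypothetical "tenant_" prefix; adjust the
# LIKE pattern and the database name for your setup.
DB=mydb
for schema in $(psql -At -d "$DB" \
    -c "SELECT nspname FROM pg_namespace WHERE nspname LIKE 'tenant\_%'"); do
  pg_dump -Fc -d "$DB" -n "$schema" -f "${schema}.dump"
done
```

[pg_dump also accepts the --schema switch multiple times, and its pattern syntax allows globs such as -n 'tenant_0*', so the schemas can equally be dumped in small groups rather than one at a time.]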

> Is there anything I can do to improve that situation? Next thing that comes
> to my mind is to distribute those ~550 schemata over 5 to 6 databases in
> one database cluster instead of having one single database.

Yeah, you definitely don't want to have this many tables in one
database, especially not on a platform that's going to be chary
of memory.

            regards, tom lane


