Thread: pg_dump --binary-upgrade out of memory


pg_dump --binary-upgrade out of memory

From:
Антон Глушаков
Date:
Hi.
I ran into a problem upgrading my instance (14 -> 15) via pg_upgrade.
The utility crashed with an out-of-memory error.
After some investigation I found that it happens during the schema export with pg_dump.

Then I tried to dump the schema manually with the --binary-upgrade option and also got an out of memory.
Digging a little deeper, I discovered quite a large number of blobs in the database (pg_largeobject is 10 GB and pg_largeobject_metadata is 1 GB, about 31 million rows).
I was able to reproduce the problem on a clean server by simply putting some random data into pg_largeobject_metadata:

$ insert into pg_largeobject_metadata (select i, 16390 from generate_series(107659, 34274365) as i);

$ pg_dump --binary-upgrade --format=custom -d mydb -s -f tmp.dmp

After 1-2 minutes it runs out of memory (I tried on servers with 4 GB and 8 GB of RAM).

Is this a bug? How can I perform the upgrade?
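
(For reference: a quick way to gauge how many large objects a database holds before attempting such an upgrade is to count the rows in pg_largeobject_metadata and check the size of pg_largeobject. "mydb" is just the example database name used above; roughly speaking, every large object becomes a separate entry in pg_dump's internal catalog on pre-17 versions, which is what drives the memory use here.)

$ psql -d mydb -c "select count(*) from pg_largeobject_metadata;"
$ psql -d mydb -c "select pg_size_pretty(pg_total_relation_size('pg_largeobject'));"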

Re: pg_dump --binary-upgrade out of memory

From:
Tom Lane
Date:
Антон Глушаков <a.glushakov86@gmail.com> writes:
> I ran into a problem upgrading my instance (14 -> 15)
> via pg_upgrade.
> The utility crashed with an out-of-memory error.
> After some investigation I found that it happens during the schema
> export with pg_dump.
> Then I tried to dump the schema manually with the --binary-upgrade
> option and also got an out of memory.
> Digging a little deeper, I discovered quite a large number of blobs
> in the database (pg_largeobject is 10 GB and pg_largeobject_metadata
> is 1 GB, about 31 million rows)

Yeah, dumping a database with a lot of blobs is a known pain point.

I have some patches in progress that intend to make that better [1],
but they're meant for v17 and I'm not sure if you could get them to
work in v15.  In the meantime, trying to run pg_dump on a beefier
machine might be your best option.

            regards, tom lane

[1] https://commitfest.postgresql.org/47/4713/
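
(One way to act on the "beefier machine" suggestion for the standalone schema dump: since pg_dump is a client program and allocates its working memory on the machine it runs on, it can be pointed at the same server from another host with more RAM. The host name below is only an example; note that pg_upgrade itself still runs pg_dump on the machine where pg_upgrade is started, so this only helps for the manual dump test.)

$ pg_dump -h old-db-host -U postgres --binary-upgrade --format=custom -s -d mydb -f tmp.dmp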



AW: pg_dump --binary-upgrade out of memory

From:
"Dischner, Anton"
Date:

Hi,

 

> I ran into a problem upgrading my instance (14 -> 15) via pg_upgrade.
> The utility crashed with an out-of-memory error.
> [...]
> $ pg_dump --binary-upgrade --format=custom -d mydb -s -f tmp.dmp
> After 1-2 minutes it runs out of memory (I tried on servers with 4 GB and 8 GB of RAM).
> Is this a bug? How can I perform the upgrade?

A quick-and-dirty workaround might be to add temporary swap space, as described here: https://en.euro-linux.com/blog/creating-a-swap-file-or-how-to-deal-with-a-temporary-memory-shortage/ (a rough sketch follows below).
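
(A rough sketch of what the linked recipe boils down to on Linux; the swap size and file path are only examples and should be adjusted to the system:)

$ sudo fallocate -l 16G /swapfile
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile

and once the upgrade is finished:

$ sudo swapoff /swapfile && sudo rm /swapfile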

 

best,

Anton

Re: pg_dump --binary-upgrade out of memory

From:
Антон Глушаков
Date:
Thanks for the answers.
Increasing the RAM helped.
By my estimate, processing 1 million rows from pg_largeobject_metadata with pg_dump takes about 750 MB of memory (RSS as reported by ps, AlmaLinux 8).
The running time is frustrating, though: the dump took about 40 minutes.
Really hoping for the fixes in v17.
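
(If that estimate scales roughly linearly, the ~31 million rows in this database would need on the order of 31 x 750 MB, i.e. somewhere around 23 GB, for the schema dump alone, which is consistent with the failures on the 4 GB and 8 GB machines.)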

On Wed, 14 Feb 2024 at 10:11, Dischner, Anton <Anton.Dischner@med.uni-muenchen.de> wrote:

> A quick-and-dirty workaround might be to add temporary swap space, as described here:
> https://en.euro-linux.com/blog/creating-a-swap-file-or-how-to-deal-with-a-temporary-memory-shortage/