Discussion: Upgrade from 11.3 to 13.1 failed with out of memory

Upgrade from 11.3 to 13.1 failed with out of memory

From
Mihalidesová Jana
Date:


Hi,

 

I have an approximately 560 GB database and am trying to upgrade it from 11.3 to 13.1. I've successfully upgraded the dev, test and ref environments, but on production pg_dump failed with out of memory. Yes, of course, dev, test and ref are much, much smaller than the production database.

We are using the OID data type, so there are a lot of large objects; pg_largeobject is 59 GB.

The upgrade process fails during the schema-only pg_dump, so I'm confused why the 35 GB of RAM that is free on the server isn't enough when no data is being dumped. When I tried to run the same pg_dump command by hand as during the upgrade, it failed at the line "pg_dump: reading large objects".

 

Creating dump of global objects                             "/pgsql/bin/13.1_XXXX/bin/pg_dumpall" --host /pgsql/data/XXXX --port 50432 --username XXXXXX --globals-only --quote-all-identifiers --binary-upgrade --verbose -f pg_upgrade_dump_globals.sql >> "pg_upgrade_utility.log" 2>&1

ok

Creating dump of database schemas

"/pgsql/bin/13.1_XXXX/bin/pg_dump" --host /pgsql/data/XXXX --port 50432 --username XXXXXX --schema-only --quote-all-identifiers --binary-upgrade --format=custom --verbose --file="pg_upgrade_dump_16384.custom" 'dbname=XXXX' >> "pg_upgrade_dump_16384.log" 2>&1

 

*failure*

There were problems executing ""/pgsql/bin/13.1_XXXX/bin/pg_dump" --host /pgsql/data/XXXX --port 50432 --username pgpnip --schema-only --quote-all-identifiers --binary-upgrade --format=custom --verbose --file="pg_upgrade_dump_16384.custom" 'dbname=XXXX' >> "pg_upgrade_dump_16384.log" 2>&1"

 

 

Do you have any idea how to upgrade the database? This is my upgrade command:

 

/pgsql/bin/13.1_XXXX/bin/pg_upgrade -k -b /pgsql/bin/11.3_XXXX/bin -B /pgsql/bin/13.1_XXXX/bin -d /pgsql/data/XXXX -D /pgsql/data/XXXX/XXXX.new

 

Thank you for any ideas.

 

Best regards,

 

Jana Mihalidesova

Database Administrator

 

CETIN a.s.

Českomoravská 2510/19, 190 00 Praha 9

m: +420 603 419 862    t: +420 238 465 074

jana.mihalidesova@cetin.cz

 

Re: Upgrade from 11.3 to 13.1 failed with out of memory

From
Magnus Hagander
Date:
On Tue, Apr 6, 2021 at 3:08 PM Mihalidesová Jana
<jana.mihalidesova@cetin.cz> wrote:
>
> Hi,
>
> I have an approximately 560 GB database and am trying to upgrade it from 11.3 to 13.1. I've successfully upgraded the dev, test and ref environments, but on production pg_dump failed with out of memory. Yes, of course, dev, test and ref are much, much smaller than the production database.
>
> We are using the OID data type, so there are a lot of large objects; pg_largeobject is 59 GB.
>
> The upgrade process fails during the schema-only pg_dump, so I'm confused why the 35 GB of RAM that is free on the server isn't enough when no data is being dumped. When I tried to run the same pg_dump command by hand as during the upgrade, it failed at the line "pg_dump: reading large objects".
>
> Creating dump of global objects                             "/pgsql/bin/13.1_XXXX/bin/pg_dumpall" --host /pgsql/data/XXXX --port 50432 --username XXXXXX --globals-only --quote-all-identifiers --binary-upgrade --verbose -f pg_upgrade_dump_globals.sql >> "pg_upgrade_utility.log" 2>&1
>
> ok
>
> Creating dump of database schemas
>
> "/pgsql/bin/13.1_XXXX/bin/pg_dump" --host /pgsql/data/XXXX --port 50432 --username XXXXXX --schema-only --quote-all-identifiers --binary-upgrade --format=custom --verbose --file="pg_upgrade_dump_16384.custom" 'dbname=XXXX' >> "pg_upgrade_dump_16384.log" 2>&1
>
> *failure*
>
> There were problems executing ""/pgsql/bin/13.1_XXXX/bin/pg_dump" --host /pgsql/data/XXXX --port 50432 --username pgpnip --schema-only --quote-all-identifiers --binary-upgrade --format=custom --verbose --file="pg_upgrade_dump_16384.custom" 'dbname=XXXX' >> "pg_upgrade_dump_16384.log" 2>&1"
>
> Do you have any idea how to upgrade the database? This is my upgrade command:
>
> /pgsql/bin/13.1_XXXX/bin/pg_upgrade -k -b /pgsql/bin/11.3_XXXX/bin -B /pgsql/bin/13.1_XXXX/bin -d /pgsql/data/XXXX -D /pgsql/data/XXXX/XXXX.new
>


This is unfortunately a known limitation in pg_dump (and therefore, by
proxy, a problem for pg_upgrade as well) when you have many large
objects. It doesn't really matter how big they are, it matters how
*many* they are. It takes a long time and uses crazy amounts of
memory, but that's unfortunately where it's at. You'd have the same
problem with a plain dump/reload as well, not just the "binary upgrade
mode".

There's been some recent work on trying to find a remedy for this, but
nothing is available at this point. You'll need to either trim the
number of objects if you can (by maybe manually dumping them out to
files before the restore and then reloading them back in later), or
just add more memory/swap to the machine.
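
A rough sketch of the "dump them out to files" idea, assuming superuser access on the old cluster: lo_export/lo_import run on the database server, so the directory below lives on that server, and /lo_backup plus the my_table/document_oid names are only placeholders for wherever your application actually keeps its large-object references:

    -- 1. Write every large object to a file named after its OID.
    SELECT lo_export(m.oid, '/lo_backup/' || m.oid)
    FROM pg_largeobject_metadata m;

    -- 2. Only after the exported files are verified: drop the large objects
    --    so pg_dump no longer has to read them (this is destructive!).
    SELECT lo_unlink(m.oid)
    FROM pg_largeobject_metadata m;

    -- 3. After the upgrade, re-import each one under its original OID so the
    --    references stored in your tables stay valid.
    SELECT lo_import('/lo_backup/' || t.document_oid, t.document_oid)
    FROM my_table t
    WHERE t.document_oid IS NOT NULL;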

Long term you should probably consider switching to using bytea
columns when you have that many objects.
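
For that longer-term route, the conversion itself is mostly mechanical. A minimal sketch, again assuming a hypothetical table my_table with an OID column document_oid (lo_get() has been available since 9.4, so it works on 11.3):

    -- Add a bytea column and copy the large object contents into it.
    ALTER TABLE my_table ADD COLUMN document bytea;

    UPDATE my_table
       SET document = lo_get(document_oid)
     WHERE document_oid IS NOT NULL;

    -- Once the data is verified, remove the large objects and the old column.
    SELECT lo_unlink(document_oid) FROM my_table WHERE document_oid IS NOT NULL;
    ALTER TABLE my_table DROP COLUMN document_oid;

For a table of this size you would probably want to run the UPDATE in batches rather than in a single transaction.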

--
 Magnus Hagander
 Me: https://www.hagander.net/
 Work: https://www.redpill-linpro.com/



RE: Upgrade from 11.3 to 13.1 failed with out of memory

From
Mihalidesová Jana
Date:

Hi,

Thanks a lot for the information.

Best regards,
 Jana

-----Original Message-----
From: Magnus Hagander <magnus@hagander.net> 
Sent: Tuesday, April 6, 2021 3:23 PM
To: Mihalidesová Jana <jana.mihalidesova@cetin.cz>
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Upgrade from 11.3 to 13.1 failed with out of memory
