Discussion: BUG #13446: pg_dump fails with large tuples
The following bug has been logged on the website:
Bug reference: 13446
Logged by: CPT
Email address: cpt@novozymes.com
PostgreSQL version: 9.3.5
Operating system: Linux, Ubuntu 12, 64-bit
Description:
It looks to me like pg_dump is limited to 1GB per row of textual
representation.
# create table stringtest (test text);
CREATE TABLE
# insert into stringtest select repeat('A', (1024*2014*510));
INSERT 1
# alter table stringtest add test2 text;
ALTER TABLE
# update stringtest set test2 = test;
UPDATE 1
# \q
$
So far so good... Now let's try to back this up using pg_dump:
$ pg_dump ... -t stringtest
...
pg_dump: Dumping the contents of table "stringtest" failed: PQgetResult()
failed.
pg_dump: Error message from server: ERROR: out of memory
DETAIL: Cannot enlarge string buffer containing 1051791361 bytes by
1051791360 more bytes.
pg_dump: The command was: COPY public.stringtest (test, test2) TO stdout;
This message then shows up in the server logs. It looks like maybe pg_dump
is limited to exactly 1GB textual representation?
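The numbers in the error line up with PostgreSQL's 1 GiB allocation ceiling
(MaxAllocSize = 2^30 - 1 = 1,073,741,823 bytes), assuming COPY builds each
row's full textual form in a single palloc'd buffer: each column here is
1024*2014*510 = 1,051,791,360 bytes, so one column fits under the cap but two
together do not. A quick illustrative check in psql:
# select 1024*2014*510 as one_col, 2*(1024*2014*510) as two_cols, 1073741823 as max_alloc;
This returns one_col = 1051791360, two_cols = 2103582720, and max_alloc =
1073741823: the buffer already holding the first column plus its one-byte
field delimiter (hence 1051791361) cannot be enlarged by the second column's
1051791360 bytes, exactly as the error reports.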
On Tue, Jun 16, 2015 at 8:20 PM, <cpt@novozymes.com> wrote:
> This message then shows up in the server logs. It looks like maybe pg_dump
> is limited to exactly 1GB textual representation?

Yep. That's a known limitation of COPY and palloc() in general.
--
Michael
On Wed, 17-06-2015 at 13:08 +0900, Michael Paquier wrote:
> On Tue, Jun 16, 2015 at 8:20 PM, <cpt@novozymes.com> wrote:
> > This message then shows up in the server logs. It looks like maybe pg_dump
> > is limited to exactly 1GB textual representation?
>
> Yep. That's a known limitation of COPY and palloc() in general.

But could we use pg_dump with -Fc or -Fd in this case, or is the only way to
back up this table a physical backup?

Thanks
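A possible workaround sketch (untested; "mydb" and the output file names are
placeholders): -Fc and -Fd only change the archive format on the client side,
while the server still runs the same COPY ... TO stdout, so the 1 GiB
row-buffer limit should still be hit. Dumping each oversized column
separately keeps every row's text under the cap:
$ psql -d mydb -c "\copy (select test from stringtest) to 'stringtest_test.out'"
$ psql -d mydb -c "\copy (select test2 from stringtest) to 'stringtest_test2.out'"
Each single-column row is about 1,051,791,360 bytes, just under MaxAllocSize,
so COPY can build it. A physical backup (for example with pg_basebackup)
sidesteps the textual representation entirely.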