Re: pg_dump / copy bugs with "big lines" ?
From: Daniel Verite
Subject: Re: pg_dump / copy bugs with "big lines" ?
Date:
Msg-id: 28a1f376-e006-4ecf-93f5-133737652c5c@mm
In reply to: Re: pg_dump / copy bugs with "big lines" ? ("Daniel Verite" <daniel@manitou-mail.org>)
Responses: Re: pg_dump / copy bugs with "big lines" ?
List: pgsql-hackers
Daniel Verite wrote:

> # \copy bigtext2 from '/var/tmp/bigtext.sql'
> ERROR: 54000: out of memory
> DETAIL: Cannot enlarge string buffer containing 1073741808 bytes by 8191
> more bytes.
> CONTEXT: COPY bigtext2, line 1
> LOCATION: enlargeStringInfo, stringinfo.c:278

To get past that problem, I've tried tweaking the StringInfoData used for
COPY FROM, as the original patch does in CopyOneRowTo.

It turns out that COPY then fails a bit later, when trying to build a
tuple from the big line, in heap_form_tuple():

	tuple = (HeapTuple) palloc0(HEAPTUPLESIZE + len);

This fails because (HEAPTUPLESIZE + len) is again considered an invalid
allocation size, the size being 1468006476 in my test.

At this point it feels like a dead end, at least for the idea that
extending StringInfoData might suffice to enable COPYing such large rows.
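For reference, here is a minimal standalone sketch of where the 1 GB
ceiling comes from, as I read the backend code: MaxAllocSize is copied
from src/include/utils/memutils.h, while the two checks are simplified
from enlargeStringInfo() in stringinfo.c and from the AllocSizeIsValid()
test that palloc0() applies; the HEAPTUPLESIZE value below is merely
illustrative (the real one is MAXALIGN(sizeof(HeapTupleData))).

    #include <stdio.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define MaxAllocSize  ((size_t) 0x3fffffff)  /* 1 gigabyte - 1 */
    #define HEAPTUPLESIZE ((size_t) 32)          /* illustrative only */

    /* Simplified form of the test in enlargeStringInfo(): can a buffer
     * currently holding "len" bytes grow by "needed" more bytes? */
    static bool
    stringinfo_can_enlarge(size_t len, size_t needed)
    {
        return needed < MaxAllocSize - len;
    }

    /* Simplified form of AllocSizeIsValid(), which palloc0() applies to
     * every ordinary allocation request. */
    static bool
    alloc_size_is_valid(size_t size)
    {
        return size <= MaxAllocSize;
    }

    int
    main(void)
    {
        /* The COPY FROM failure: the buffer holds 1073741808 bytes and
         * wants 8191 more, but only 15 bytes of headroom remain. */
        printf("enlarge 1073741808 by 8191: %s\n",
               stringinfo_can_enlarge(1073741808, 8191) ? "ok" : "fails");

        /* The heap_form_tuple() failure: len = 1468006476 exceeds
         * MaxAllocSize on its own, before the header size is added. */
        printf("palloc0(HEAPTUPLESIZE + 1468006476): %s\n",
               alloc_size_is_valid(HEAPTUPLESIZE + 1468006476) ? "ok" : "fails");
        return 0;
    }

So any approach that keeps a whole row in a single palloc'd chunk would
have to lift the MaxAllocSize-based checks in both places, not just in
the StringInfo code.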
Best regards,

--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite