Discussion: Large data field causes a backend crash.
Robert Bruccoleri (bruc@stone.congen.com) reports a bug with a severity of 3.
The lower the number, the more severe it is.
Short Description
Large data field causes a backend crash.
Long Description
In testing TOAST in PostgreSQL 7.1beta4, I was curious to see
how big a field could actually be handled. I created a simple table
with one text field, seq, and tried using the COPY command to
fill it with a value of length 194325306 characters. It crashed
the system with the following messages:
test=# copy test from '/stf/bruc/RnD/genscan/foo.test';
TRAP: Too Large Allocation Request("!(0 < (size) && (size) <= ((Size) 0xfffffff)):size=268435456 [0x10000000]", File: "mcxt.c", Line: 478)
!(0 < (size) && (size) <= ((Size) 0xfffffff)) (0) [No such file or directory]
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Server process (pid 2109589) exited with status 134 at Mon Feb 5 15:20:42 2001
Terminating any active server processes...
The Data Base System is in recovery mode
----------------------------------------------------------------------
I have tried a field of length 52000000 characters, and that worked
fine (very impressive!).
The system should reject an oversize record gracefully instead of crashing.
Sample Code
No file was uploaded with this report
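An input of the reported size can be regenerated with a small program along the following lines; the path and field length come from the report above, and the generator itself is only an illustrative sketch, not anything attached to the original submission.

/*
 * Sketch of a generator for an oversize COPY input file.  Writes one row
 * for a single text column: 194325306 characters plus a trailing newline,
 * which is what COPY's text format expects for a one-column table.
 */
#include <stdio.h>

int
main(void)
{
    const long  len = 194325306L;   /* field length from the report */
    FILE       *fp = fopen("/stf/bruc/RnD/genscan/foo.test", "w");
    long        i;

    if (fp == NULL)
    {
        perror("fopen");
        return 1;
    }
    for (i = 0; i < len; i++)
        putc('A', fp);              /* one long run of text */
    putc('\n', fp);                 /* end of the single COPY row */
    fclose(fp);
    return 0;
}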
pgsql-bugs@postgresql.org writes:
> Large data field causes a backend crash.
> test=# copy test from '/stf/bruc/RnD/genscan/foo.test';
> TRAP: Too Large Allocation Request("!(0 < (size) && (size) <= ((Size) 0xfffffff)):size=268435456 [0x10000000]", File: "mcxt.c", Line: 478)
> !(0 < (size) && (size) <= ((Size) 0xfffffff)) (0) [No such file or directory]
Yeah, this should probably be treated as a plain elog(ERROR) now,
instead of an Assert failure ...
regards, tom lane
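For context, the change being suggested is to have the oversized-request check in mcxt.c raise a regular error rather than trip an assertion, so an oversize COPY aborts the current transaction instead of taking the backend down. A minimal sketch, assuming the check sits at the top of MemoryContextAlloc() and that an AllocSizeIsValid()/MaxAllocSize-style limit macro is available (the committed fix may differ in names and message wording):

#include "postgres.h"
#include "utils/memutils.h"

/*
 * Sketch only: reject oversized allocation requests with elog(ERROR)
 * instead of an Assert-style TRAP.  elog(ERROR) aborts the current
 * transaction and returns control to the client; the TRAP aborted the
 * whole backend process.
 */
void *
MemoryContextAlloc(MemoryContext context, Size size)
{
    if (!AllocSizeIsValid(size))    /* was effectively an Assert() */
        elog(ERROR, "MemoryContextAlloc: invalid request size %lu",
             (unsigned long) size);

    /* dispatch to the context's allocator, as in the backend's mcxt.c */
    return (*context->methods->alloc) (context, size);
}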
> pgsql-bugs@postgresql.org writes:
> > Large data field causes a backend crash.
>
> > test=# copy test from '/stf/bruc/RnD/genscan/foo.test';
> > TRAP: Too Large Allocation Request("!(0 < (size) && (size) <= ((Size) 0xfffffff)):size=268435456 [0x10000000]", File: "mcxt.c", Line: 478)
> > !(0 < (size) && (size) <= ((Size) 0xfffffff)) (0) [No such file or directory]
>
> Yeah, this should probably be treated as a plain elog(ERROR) now,
> instead of an Assert failure ...
Tom, can you give a little more detail? Why does the copy fail here?
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
Never mind. I read the commit message.
> pgsql-bugs@postgresql.org writes:
> > Large data field causes a backend crash.
>
> > test=# copy test from '/stf/bruc/RnD/genscan/foo.test';
> > TRAP: Too Large Allocation Request("!(0 < (size) && (size) <= ((Size) 0xfffffff)):size=268435456 [0x10000000]", File: "mcxt.c", Line: 478)
> > !(0 < (size) && (size) <= ((Size) 0xfffffff)) (0) [No such file or directory]
>
> Yeah, this should probably be treated as a plain elog(ERROR) now,
> instead of an Assert failure ...
>
> regards, tom lane
>
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026