Re: [HACKERS] memory problems in copying large table to STDOUT

From: Tom Lane
Subject: Re: [HACKERS] memory problems in copying large table to STDOUT
Date:
Msg-id: 13339.939494576@sss.pgh.pa.us
In response to: Re: [HACKERS] memory problems in copying large table to STDOUT  (Martin Weinberg <weinberg@osprey.astro.umass.edu>)
Responses: Re: [HACKERS] memory problems in copying large table to STDOUT
List: pgsql-hackers
Martin Weinberg <weinberg@osprey.astro.umass.edu> writes:
> Tom Lane wrote on Sat, 09 Oct 1999 11:54:34 EDT
>>> FATAL 1:  Memory exhausted in AllocSetAlloc()
>> 
>> Hmm.  What is the exact declaration of the table?
>> 
>> The only explanation I can think of offhand is that the output
>> conversion function for one of the column types is leaking memory...
>> copy.c itself looks to be pretty careful not to.

> The table def is:

> CREATE TABLE psc (
>     hemis        text,
>     date        date,
>     scan        int2,
>     id        int4,
>     ra        float4,
>       [ lots more fields of these same types ]

Hmm, nothing unusual there.  I made up a dummy table containing these
column types, filled it with 16 meg of junk data, and copied in and
out without observing any process memory usage growth at all, under
both current sources and 6.5.2.  I also used gdb to set a breakpoint
at AllocSetAlloc, and checked that the inner loop of the copy wasn't
allocating anything it didn't free.  So there's no obvious memory
leakage bug here.  (It'd be pretty surprising if there was, really,
for such commonly used data types.)
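
For anyone wanting to repeat that kind of check, a minimal sketch of such a
test in psql might look like the following.  The table name psc_test is made
up here, and generate_series() is a convenience from much later releases (it
did not exist in 6.5), so treat this purely as an illustration of the shape of
the test: a dummy table with the same column types, a pile of junk rows, and a
COPY out while watching the backend's memory usage.

	-- hypothetical table, same column types as the start of psc
	CREATE TABLE psc_test (
	    hemis   text,
	    date    date,
	    scan    int2,
	    id      int4,
	    ra      float4
	);

	-- junk rows; generate_series() is only shown to keep the sketch short
	INSERT INTO psc_test
	    SELECT 'n', '1999-10-09', 1, i, i / 3.0
	    FROM generate_series(1, 500000) AS i;

	-- copy out while watching the backend with ps or top
	COPY psc_test TO STDOUT;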

I'm now thinking that there must be either a problem specific to your
platform, or some heretofore unnoticed problem with copying from a
multi-segment (ie, multi-gigabyte) table.  I don't have enough disk
space to check the latter theory here...

Can you prepare a debugger backtrace showing what the backend is doing
when it gets the error?  If you're not familiar with gdb, it'd go
something like this:

1. Build & install postgres with debugging symbols enabled
   ("make CUSTOM_COPT=-g all").

2. Start gdb on the postgres executable, eg
   "gdb /usr/local/pgsql/bin/postgres".

3. Fire up the copy-out operation as usual.  (I assume this takes long
   enough that you have plenty of time for the next two steps ;-))

4. Use ps or top to find out the process number of the backend handling
   the session.

5. Attach to that process number in gdb:
	(gdb) attach NNNN

6. Set a breakpoint at elog, and let the backend continue running:
	(gdb) break elog
	(gdb) continue

7. When the breakpoint is hit, get a backtrace:
	Breakpoint 1, elog ...
	(gdb) bt
   After copying & pasting the resulting printout, you can "quit" to
   get out of gdb.
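
Put together, the attach-and-backtrace part of such a session might look
roughly like this (NNNN stands for whatever process number ps or top
reported; the breakpoint output shown is only indicative):

	$ gdb /usr/local/pgsql/bin/postgres
	(gdb) attach NNNN
	(gdb) break elog
	(gdb) continue
	... the COPY runs until the error is about to be reported ...
	Breakpoint 1, elog (...)
	(gdb) bt
	(gdb) quit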
 
        regards, tom lane

