Concern about memory management with SRFs

From: Tom Lane
Subject: Concern about memory management with SRFs
Date:
Msg-id: 22310.1030576125@sss.pgh.pa.us
List: pgsql-hackers
I've been looking at the sample SRFs (show_all_settings, etc.) and am not
happy about the way memory management is done.  As the code is currently
set up, the functions essentially assume that they are executed in a
context that will never be reset until they're done returning tuples.
(This is true because tupledescs and so on are blithely constructed in
CurrentMemoryContext during the first call.)

This approach means that SRFs cannot afford to leak any memory per-call.
If they do, and the result set is large, they will run the backend out
of memory.  I don't think that's acceptable.

The reason that the code fails to crash is that nodeFunctionscan.c
doesn't do a ResetExprContext(econtext) in the loop that collects rows
from the function and stashes them in the tuplestore.  But I think it
must do so in the long run, and so it would be better to get this right
the first time.
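
Roughly, the collection loop would then look something like this (a
sketch only, not the actual nodeFunctionscan.c source; CallSRFOnce() is
a hypothetical stand-in for however the executor actually invokes the
function):

    for (;;)
    {
        HeapTuple   tuple;
        bool        isDone;

        /* free whatever the previous call left in the per-call context */
        ResetExprContext(econtext);

        tuple = CallSRFOnce(funcexpr, econtext, &isDone);   /* hypothetical */
        if (isDone)
            break;

        tuplestore_puttuple(tuplestorestate, tuple);
    }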

I think we should document that any memory that is allocated in the
first call for use in subsequent calls must come from the memory context
saved in FuncCallContext (and let's choose a more meaningful name than
fmctx, please).  This would mean adding code like
    oldcontext = MemoryContextSwitchTo(funcctx->fmctx);
    ...
    MemoryContextSwitchTo(oldcontext);

around the setup code that follows SRF_FIRSTCALL_INIT.  Then it would be
safe for nodeFunctionscan.c to do a reset before each function call.
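
To illustrate the convention, here is a minimal sketch of an SRF written
that way (my_srf() and its contents are made up; the point is only where
the allocations happen):

    #include "postgres.h"
    #include "funcapi.h"

    PG_FUNCTION_INFO_V1(my_srf);

    Datum
    my_srf(PG_FUNCTION_ARGS)
    {
        FuncCallContext *funcctx;

        if (SRF_IS_FIRSTCALL())
        {
            MemoryContext oldcontext;

            funcctx = SRF_FIRSTCALL_INIT();

            /* cross-call state must live in the saved context, so a
             * per-call reset cannot free it out from under us */
            oldcontext = MemoryContextSwitchTo(funcctx->fmctx);

            funcctx->max_calls = 3;     /* tupledesc etc. would go here too */

            MemoryContextSwitchTo(oldcontext);
        }

        funcctx = SRF_PERCALL_SETUP();

        if (funcctx->call_cntr < funcctx->max_calls)
        {
            /* per-call allocations can go in CurrentMemoryContext;
             * they are fair game for ResetExprContext() between calls */
            SRF_RETURN_NEXT(funcctx,
                            Int32GetDatum((int32) funcctx->call_cntr));
        }

        SRF_RETURN_DONE(funcctx);
    }

The only thing that survives from call to call is what was deliberately
put in the saved context.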

Comments?
        regards, tom lane

