Discussion: Unexpected memory usage for repeated inserts within plpgsql function
Hi Hackers,
I have been seeing memory usage increase steadily while running a simple plpgsql function. Could you please take a look and check whether it is a bug?
The function definition is:
create function f() returns int as
$$DECLARE
    count int;
BEGIN
    count := 1;
    LOOP
        count := count + 1;
        BEGIN
            EXECUTE 'insert into test values(10)';
            IF count > 1000000 THEN
                EXIT;
            END IF;
        EXCEPTION WHEN others THEN
            NULL;  -- swallow the error (e.g. table "test" does not exist)
        END;
    END LOOP;
    RETURN count;
END$$
LANGUAGE plpgsql;
When I ran this function with "select f()", the top command showed the resident memory of the backend process increasing continuously:
Then I used gdb to suspend the process each time an insert was actually executed, and issued "(gdb) p MemoryContextStats(TopMemoryContext)". In the server log, I saw that of all the memory contexts, only the "SPI Proc" context kept growing:
My question: is this behavior as designed?
Thank you for your time!!
Guangzhou
On Thu, Jul 21, 2016 at 4:09 PM, happy times <guangzhouzhang@qq.com> wrote:
My question: Is this problem as-designed?
Actually, the problem is in the exec_stmt_dynexecute function. We make a copy of the query string, and before we free it, SPI_execute throws an error (because the table does not exist). Since this happens inside an effectively infinite loop, we see a memory leak.
exec_stmt_dynexecute
{
    ....
    /* copy it out of the temporary context before we clean up */
    querystr = pstrdup(querystr);
    ....
}
Thank you. Yes, when I correctly created the table (so avoiding the insertion statement error), the memory usage went no higher than ~60MB. Is this a bug from your point of view?

--
View this message in context: http://postgresql.nabble.com/Unexpected-memory-usage-for-repeated-inserts-within-plpgsql-function-tp5912833p5913022.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.
On Fri, Jul 22, 2016 at 4:06 PM, Guangzhou Zhang <35514815@qq.com> wrote:
Thank you. Yes when I correctly created the table (so avoiding insertion
statement error), the memory usage went no higher than ~ 60MB.
Is this a bug from your point of view?
There is a discussion going on in another thread about how to fix this issue; you can refer to that thread.