Re: Curing plpgsql's memory leaks for statement-lifespan values
From: Amit Kapila
Subject: Re: Curing plpgsql's memory leaks for statement-lifespan values
Date:
Msg-id: CAA4eK1JRdXqfaA25vJkVcjoi0h5YzH9Mi1brTO2DgDO6SVE26g@mail.gmail.com
In reply to: Curing plpgsql's memory leaks for statement-lifespan values (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Curing plpgsql's memory leaks for statement-lifespan values (Dilip Kumar <dilipbalaut@gmail.com>);
           Re: Curing plpgsql's memory leaks for statement-lifespan values (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Fri, Jul 22, 2016 at 4:32 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> In
> https://www.postgresql.org/message-id/tencent_5C738ECA65BAD6861AA43E8F@qq.com
> it was pointed out that you could get an intra-function-call memory leak
> from something like
>
>     LOOP
>       BEGIN
>         EXECUTE 'bogus command';
>       EXCEPTION WHEN OTHERS THEN
>       END;
>     END LOOP;
>
> The problem is that exec_stmt_dynexecute() loses control to the error
> thrown from the bogus command, and therefore leaks its "querystr" local
> variable --- which is not much of a leak, but it adds up if the loop
> iterates enough times. There are similar problems in many other places in
> plpgsql. Basically the issue is that while running a plpgsql function,
> CurrentMemoryContext points to a function-lifespan context (the same as
> the SPI procCxt the function is using). We also store things such as
> values of the function's variables there, so just resetting that context
> is not an option. plpgsql does have an expression-evaluation-lifespan
> context for short-lived stuff, but anything that needs to live for more
> or less the duration of a statement is put into the procedure-lifespan
> context, where it risks becoming a leak. (That design was fine
> originally, because any error would result in abandoning function
> execution and thereby cleaning up that context. But once we invented
> plpgsql exception blocks, it's not so good.)

Wouldn't it be better if each nested sub-block that has an exception handler had its own separate "SPI Proc" and "SPI Exec" contexts, destroyed at sub-block end (either commit or rollback)? I think one difficulty could be that we would need some way to propagate any information that outer blocks still require, if such information exists.

> One way we could resolve the problem is to require all plpgsql code to
> use PG_TRY/PG_CATCH blocks to ensure that statement-lifespan variables
> are explicitly released.
> That's undesirable on pretty much every front though: it'd be
> notationally ugly, prone to omissions, and not very speedy.
>
> Another answer is to invent a third per-function memory context intended
> to hold statement-lifespan variables.

This sounds better than spreading PG_TRY/PG_CATCH everywhere. I think if this allocation had been done in the executor context "SPI Exec", it would not have leaked. One alternative would be for all exec_stmt* functions to execute in the "SPI Exec" context by default, switching to "SPI Proc" only for memory that must survive longer. That might not be good if we have to switch in many places, but OTOH the same would be required for a new statement-level execution context as well. In short, why do you think it is better to create a new context rather than using "SPI Exec"?

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com