Re: [HACKERS] Error: dsa_area could not attach to a segment that has been freed
From: Gaddam Sai Ram
Msg-id: 15eeda33971.d76221bf3054.3846842419807854411@zohocorp.com
In reply to: Re: [HACKERS] Error: dsa_area could not attach to a segment that has been freed (Thomas Munro <thomas.munro@enterprisedb.com>)
List: pgsql-hackers
Hi Thomas,
Thanks for cautioning us about possible memory leaks (during error cases) in case of long-lived DSA segments.
Actually, we are following an approach to avoid these DSA memory leaks. Let me explain our implementation, and please validate and correct us in case we missed anything.
Implementation:
Basically, we have to put our index data into memory (index column value vs. ctid), which we get in the aminsert callback function.
Coming to the implementation, in the aminsert callback function:
- We switch to CurTransactionContext.
- We cache the DMLs of a transaction in a dlist (global per process).
- Even if different clients work in parallel, it won't be a problem, because every client gets its own dlist in a separate process, and each process has its own CurTransactionContext.
- We have registered a transaction callback (using RegisterXactCallback()). During the pre-commit event (XACT_EVENT_PRE_COMMIT), we populate all the transaction-specific DMLs (from the dlist) into our in-memory index (DSA), inside a PG_TRY/PG_CATCH block.
- In case we get errors (because of dsa_allocate() or something else) while processing the dlist (while populating the in-memory index), we clean up, in the PG_CATCH block, the DSA memory allocated/used up to that point.
- In other error cases, the transaction typically gets aborted and the PRE_COMMIT event is not called, so we don't touch the DSA at that time; hence there is no need to worry about leaks.
- Even the subtransaction case is handled with subtransaction callbacks.
- CurTransactionContext (the dlist, basically) is automatically cleared after that particular transaction.
I want to know whether this approach is sound and works well in all cases. Kindly provide your feedback on this.
Regards
G. Sai Ram
---- On Wed, 20 Sep 2017 14:25:43 +0530 Thomas Munro <thomas.munro@enterprisedb.com> wrote ----
On Wed, Sep 20, 2017 at 6:14 PM, Gaddam Sai Ram <gaddamsairam.n@zohocorp.com> wrote:
> Thank you very much! That fixed my issue! :)
> I was in an assumption that pinning the area will increase its lifetime but
> yeah after taking memory context into consideration its working fine!

So far the success rate in confusing people who first try to make long-lived DSA areas and DSM segments is 100%. Basically, this is all designed to ensure automatic cleanup of resources in short-lived scopes.

Good luck for your graph project. I think you're going to have to expend a lot of energy trying to avoid memory leaks if your DSA lives as long as the database cluster, since error paths won't automatically free any memory you allocated in it. Right now I don't have any particularly good ideas for mechanisms to deal with that. PostgreSQL C has exception-like error handling, but doesn't (and probably can't) have a language feature like scoped destructors from C++. IMHO exceptions need either destructors or garbage collection to keep you sane. There is a kind of garbage collection for palloc'd memory and also for other resources like file handles, but if you're using a big long-lived DSA area you have nothing like that. You can use PG_TRY/PG_CATCH very carefully to clean up, or (probably better) you can try to make sure that all your interaction with shared memory is no-throw (note that that means using dsa_allocate_extended(x, DSA_ALLOC_NO_OOM), because dsa_allocate itself can raise errors). The first thing I'd try would probably be to keep all shmem-allocating code in as few routines as possible, and use only no-throw operations in the 'critical' regions of them, and maybe look into some kind of undo log of things to free or undo in case of error to manage multi-allocation operations if that turned out to be necessary.

--
Thomas Munro