Discussion: pgsql: Add hooks for session start and session end, take two
Add hooks for session start and session end, take two

These hooks can be used in loadable modules. A simple test module is
included.

The first attempt was done with cd8ce3a but we lacked handling for
NO_INSTALLCHECK in the MSVC scripts (problem solved afterwards by
431f1599) so the buildfarm got angry. This also fixes a couple of
issues noticed upon review compared to the first attempt, so the code
has slightly changed, resulting in a more simple test module.

Author: Fabrízio de Royes Mello, Yugo Nagata
Reviewed-by: Andrew Dunstan, Michael Paquier, Aleksandr Parfenov
Discussion: https://postgr.es/m/20170720204733.40f2b7eb.nagata@sraoss.co.jp
Discussion: https://postgr.es/m/20190823042602.GB5275@paquier.xyz

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/e788bd924c19e296bd34316e30e3ba1b68354e64

Modified Files
--------------
src/backend/tcop/postgres.c                        |   6 +
src/backend/utils/init/postinit.c                  |   6 +
src/include/tcop/tcopprot.h                        |   7 +
src/test/modules/Makefile                          |   1 +
src/test/modules/test_session_hooks/.gitignore     |   4 +
src/test/modules/test_session_hooks/Makefile       |  23 ++++
src/test/modules/test_session_hooks/README         |  11 ++
.../expected/test_session_hooks.out                |  37 ++++++
.../modules/test_session_hooks/session_hooks.conf  |   2 +
.../test_session_hooks/sql/test_session_hooks.sql  |  19 +++
.../test_session_hooks/test_session_hooks.c        | 146 +++++++++++++++++++++
11 files changed, 262 insertions(+)
On Tue, Oct 01, 2019 at 03:16:06AM +0000, Michael Paquier wrote:
> Add hooks for session start and session end, take two
>
> These hooks can be used in loadable modules. A simple test module is
> included.
>
> The first attempt was done with cd8ce3a but we lacked handling for
> NO_INSTALLCHECK in the MSVC scripts (problem solved afterwards by
> 431f1599) so the buildfarm got angry. This also fixes a couple of
> issues noticed upon review compared to the first attempt, so the code
> has slightly changed, resulting in a more simple test module.

So the buildfarm got again angry on that with crake and thorntail:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2019-10-01%2003%3A32%3A29
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2019-10-01%2003%3A51%3A57

Both machines are using force_parallel_mode = regress, and I can
reproduce the module crash with that backtrace once I enforce the
parameter:

#2  0x00005609ce1f1454 in ExceptionalCondition
    (conditionName=conditionName@entry=0x5609ce25cdd0 "!IsParallelWorker()",
    errorType=errorType@entry=0x5609ce24501d "FailedAssertion",
    fileName=fileName@entry=0x5609ce259bef "xact.c",
    lineNumber=lineNumber@entry=757) at assert.c:54
#3  0x00005609cde04410 in GetCurrentCommandId (used=used@entry=true)
    at xact.c:757
#4  0x00005609cdf3e620 in standard_ExecutorStart (queryDesc=0x5609cf072330,
    eflags=0) at execMain.c:238
#5  0x00005609cdf7ad71 in _SPI_pquery (tcount=0, fire_triggers=true,
    queryDesc=<optimized out>) at spi.c:2519
#6  _SPI_execute_plan (plan=<optimized out>, paramLI=<optimized out>,
    snapshot=<optimized out>, crosscheck_snapshot=<optimized out>,
    read_only=<optimized out>, fire_triggers=<optimized out>,
    tcount=<optimized out>) at spi.c:2297
#7  0x00005609cdf7b3ac in SPI_execute (src=0x5609cf06c050 "INSERT INTO
    session_hook_log (dbname, username, hook_at) VALUES
    ('contrib_regression', 'regress_sess_hook_usr2', 'END');",
    read_only=read_only@entry=false, tcount=tcount@entry=0) at spi.c:514
#8  0x00005609cdf7b3ea in SPI_exec (src=<optimized out>,
    tcount=tcount@entry=0) at spi.c:526
#9  0x00007feefe55b2ef in register_session_hook (hook_at=<optimized out>)
    at test_session_hooks.c:67
#10 0x00005609ce09a351 in shmem_exit (code=code@entry=0) at ipc.c:239
#11 0x00005609ce09a45d in proc_exit_prepare (code=code@entry=0) at ipc.c:194
#12 0x00005609ce09a502 in proc_exit (code=code@entry=0) at ipc.c:107
#13 0x00005609ce02c42f in StartBackgroundWorker () at bgworker.c:837

This indicates that it is possible to reach GetCurrentCommandId() for
a parallel worker in this context.  It is possible to get rid of the
problem by enforcing the following in the tests so as we have no
parallel plans:
SET max_parallel_workers = 0;
SET max_parallel_workers_per_gather = 0;

Or just use SetConfigOption as per the attached, which would be a bit
cleaner.  Both could be used as a temporary solution to cool down the
buildfarm, but that does not feel completely right to me in the long
term.

Any thoughts?
--
Michael
Attachments
Michael Paquier <michael@paquier.xyz> writes:
> On Tue, Oct 01, 2019 at 03:16:06AM +0000, Michael Paquier wrote:
>> Add hooks for session start and session end, take two

> So the buildfarm got again angry on that with crake and thorntail:
> Both machines are using force_parallel_mode = regress, and I can
> reproduce the module crash with that backtrace once I enforce the
> parameter:
> [...]
> #9  0x00007feefe55b2ef in register_session_hook (hook_at=<optimized out>)
>     at test_session_hooks.c:67
> #10 0x00005609ce09a351 in shmem_exit (code=code@entry=0) at ipc.c:239
> #11 0x00005609ce09a45d in proc_exit_prepare (code=code@entry=0) at ipc.c:194
> #12 0x00005609ce09a502 in proc_exit (code=code@entry=0) at ipc.c:107
> #13 0x00005609ce02c42f in StartBackgroundWorker () at bgworker.c:837

Uh, WHAT?

The idea that you can launch a query after proc_exit has started is
complete insanity.  I hope this is just a poorly-thought-out test
case, and not something that's inherent to this module.  If there
are not reasonable use-cases that don't need that, we should just
revert the feature altogether, because it's nothing but a large
caliber foot-gun.

			regards, tom lane
On Tue, Oct 01, 2019 at 01:52:46PM +0900, Michael Paquier wrote:
> This indicates that it is possible to reach GetCurrentCommandId() for
> a parallel worker in this context.  It is possible to get rid of the
> problem by enforcing the following in the tests so as we have no
> parallel plans:
> SET max_parallel_workers = 0;
> SET max_parallel_workers_per_gather = 0;
> Or just use SetConfigOption as per the attached which would be a bit
> cleaner, and both could be used as a temporary solution to cool down
> the buildfarm but that does not feel completely right to me in the
> long term.
>
> Any thoughts?

Actually, it makes little sense to allow parallel workers to log their
activity in the module.  So if there are no objections, I would like
to fix that by checking after IsParallelWorker() in the hooks of the
module.
--
Michael
Attachments
On Tue, Oct 01, 2019 at 01:10:36AM -0400, Tom Lane wrote:
> The idea that you can launch a query after proc_exit has started is
> complete insanity.  I hope this is just a poorly-thought-out test
> case, and not something that's inherent to this module.  If there
> are not reasonable use-cases that don't need that, we should just
> revert the feature altogether, because it's nothing but a large
> caliber foot-gun.

That was originally just part of the original test to prove the point
of the session end hook, where people wanted to be able to log some
activity once the session is done with.
--
Michael
Attachments
On Tue, Oct 01, 2019 at 02:22:45PM +0900, Michael Paquier wrote:
> Actually, it makes little sense to allow parallel workers to log their
> activity in the module.  So if there are no objections, I would like
> to fix that by checking after IsParallelWorker() in the hooks of the
> module.

Fixed with 002962dc, and crake has just turned back to green.
--
Michael
Attachments
On Tue, Oct 01, 2019 at 04:13:20PM +0900, Michael Paquier wrote:
> On Tue, Oct 01, 2019 at 02:22:45PM +0900, Michael Paquier wrote:
>> Actually, it makes little sense to allow parallel workers to log their
>> activity in the module.  So if there are no objections, I would like
>> to fix that by checking after IsParallelWorker() in the hooks of the
>> module.
>
> Fixed with 002962dc, and crake has just turned back to green.

Even with that, there is still one failure with Msys and fairywren:

ALTER DATABASE
============== running regression test queries        ==============
test test_session_hooks          ... FAILED (test process exited with exit code 2)      493 ms
============== shutting down postmaster               ==============

======================
 1 of 1 tests failed.
======================

The differences that caused some tests to fail can be viewed in the
file "C:/tools/msys64/home/Administrator/bf/root/HEAD/pgsql.build/src/test/modules/test_session_hooks/regression.diffs".
A copy of the test summary that you see above is saved in the file
"C:/tools/msys64/home/Administrator/bf/root/HEAD/pgsql.build/src/test/modules/test_session_hooks/regression.out".

Unfortunately it is a bit hard to grab what the problem actually is.
Andrew, could it be possible to get more information from this animal?
--
Michael
Attachments
On 10/1/19 10:28 AM, Michael Paquier wrote:
> On Tue, Oct 01, 2019 at 04:13:20PM +0900, Michael Paquier wrote:
>> On Tue, Oct 01, 2019 at 02:22:45PM +0900, Michael Paquier wrote:
>>> Actually, it makes little sense to allow parallel workers to log their
>>> activity in the module.  So if there are no objections, I would like
>>> to fix that by checking after IsParallelWorker() in the hooks of the
>>> module.
>> Fixed with 002962dc, and crake has just turned back to green.
> Even with that, there is still one failure with Msys and fairywren:
> ALTER DATABASE
> ============== running regression test queries        ==============
> test test_session_hooks          ... FAILED (test process exited with
> exit code 2)      493 ms
> ============== shutting down postmaster               ==============
>
> ======================
>  1 of 1 tests failed.
> ======================
>
> The differences that caused some tests to fail can be viewed in the
> file
> "C:/tools/msys64/home/Administrator/bf/root/HEAD/pgsql.build/src/test/modules/test_session_hooks/regression.diffs".
> A copy of the test summary that you see above is saved in the file
> "C:/tools/msys64/home/Administrator/bf/root/HEAD/pgsql.build/src/test/modules/test_session_hooks/regression.out".
>
> Unfortunately it is a bit hard to grab what the problem actually is.
> Andrew, could it be possible to get more information from this animal?

I'll fix up the logging.  Meanwhile, the log is showing:

\c :prevdb regress_sess_hook_usr1
\connect: FATAL:  SSPI authentication failed for user "regress_sess_hook_usr1"

That's not surprising given the hba and ident file contents.

cheers

andrew

--
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Tue, Oct 01, 2019 at 03:24:21PM -0400, Andrew Dunstan wrote:
> I'll fix up the logging.  Meanwhile, the log is showing:
>
> \c :prevdb regress_sess_hook_usr1
> \connect: FATAL:  SSPI authentication failed for user
> "regress_sess_hook_usr1"
>
> That's not surprising given the hba and ident file contents.

Thanks for the details of the logs!  That makes sense, and we actually
do not have other modules with NO_INSTALLCHECK which use \c to
reconnect in a test.  The attached patch should be able to fix the
issue.  Could you confirm?
--
Michael
Attachments
Hi,

On 2019-10-01 14:25:43 +0900, Michael Paquier wrote:
> On Tue, Oct 01, 2019 at 01:10:36AM -0400, Tom Lane wrote:
> > The idea that you can launch a query after proc_exit has started is
> > complete insanity.  I hope this is just a poorly-thought-out test
> > case, and not something that's inherent to this module.  If there
> > are not reasonable use-cases that don't need that, we should just
> > revert the feature altogether, because it's nothing but a large
> > caliber foot-gun.
>
> That was originally just part of the original test to prove the point
> of the session end hook where people wanted to be able to log some
> activity once the session is done with.

How's that countering Tom's concern?  To me it seems pretty broken to
run the session hook from within ShutdownPostgres().  What if that
hook starts another transaction, for example?  Or uses any of the
other subsystems that might already have shut down at this point?  The
way this is implemented right now this cannot touch *any* subsystem
that uses before_shmem_exit hooks, because they've all already been
shutdown by the time ShutdownPostgres() is called.

Case in point, this fails reliably for me if I force every query to be
JIT compiled:

PGOPTIONS='-cjit_above_cost=0' make check -C src/test/modules/test_session_hooks/

#0  __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:67
#1  0x00007f3e772a156c in __gthread_mutex_lock (__mutex=0x0)
    at /usr/include/x86_64-linux-gnu/c++/8/bits/gthr-default.h:748
#2  __gthread_recursive_mutex_lock (__mutex=0x0)
    at /usr/include/x86_64-linux-gnu/c++/8/bits/gthr-default.h:810
#3  std::recursive_mutex::lock (this=0x0) at /usr/include/c++/8/mutex:107
#4  std::lock_guard<std::recursive_mutex>::lock_guard (__m=...,
    this=<synthetic pointer>) at /usr/include/c++/8/bits/std_mutex.h:162
#5  llvm::orc::ExecutionSession::runSessionLocked<llvm::orc::ExecutionSession::allocateVModule()::{lambda()#1}>(llvm::orc::ExecutionSession::allocateVModule()::{lambda()#1}&&)
    (F=..., this=0x0)
    at /home/andres/src/llvm-project/llvm/include/llvm/ExecutionEngine/Orc/Core.h:786
#6  llvm::orc::ExecutionSession::allocateVModule (this=0x0)
    at /home/andres/src/llvm-project/llvm/include/llvm/ExecutionEngine/Orc/Core.h:808
#7  llvm::OrcCBindingsStack::addIRModule<llvm::orc::LegacyIRCompileLayer<llvm::orc::LegacyRTDyldObjectLinkingLayer, llvm::orc::SimpleCompiler>> (
    this=this@entry=0x0, Layer=..., M=std::unique_ptr<class llvm::Module> = {...},
    MemMgr=std::unique_ptr<class llvm::RuntimeDyld::MemoryManager> = {...},
    ExternalResolver=ExternalResolver@entry=0x7f3e78223bd0 <llvm_resolve_symbol>,
    ExternalResolverCtx=0x0)
    at /home/andres/src/llvm-project/llvm/lib/ExecutionEngine/Orc/OrcCBindingsStack.h:304
#8  0x00007f3e772a1935 in llvm::OrcCBindingsStack::addIRModuleEager (ExternalResolverCtx=0x0,
    ExternalResolver=0x7f3e78223bd0 <llvm_resolve_symbol>, M=..., this=0x0)
    at /usr/include/c++/8/bits/move.h:74
#9  LLVMOrcAddEagerlyCompiledIR (JITStack=0x0, RetHandle=RetHandle@entry=0x7fff33127f78,
    Mod=0x1460fb0, SymbolResolver=SymbolResolver@entry=0x7f3e78223bd0 <llvm_resolve_symbol>,
    SymbolResolverCtx=SymbolResolverCtx@entry=0x0)
    at /home/andres/src/llvm-project/llvm/lib/ExecutionEngine/Orc/OrcCBindings.cpp:77
#10 0x00007f3e78222d84 in llvm_compile_module (context=0x13aaea8)
    at /home/andres/src/postgresql/src/backend/jit/llvm/llvmjit.c:553
#11 llvm_get_function (context=0x13aaea8, funcname=0x14a2370 "evalexpr_1_0")
    at /home/andres/src/postgresql/src/backend/jit/llvm/llvmjit.c:262
#12 0x00007f3e7822be2e in ExecRunCompiledExpr (state=0x14a1d98, econtext=0x14a19c0, isNull=0x0)
    at /home/andres/src/postgresql/src/backend/jit/llvm/llvmjit_expr.c:2434
#13 0x00000000006b2529 in ExecEvalExprNoReturn (econtext=0x14a19c0, state=0x14a1d98)
    at /home/andres/src/postgresql/src/include/executor/executor.h:356
#14 ExecEvalExprNoReturnSwitchContext (econtext=0x14a19c0, state=0x14a1d98)
    at /home/andres/src/postgresql/src/include/executor/executor.h:356
#15 ExecProject (projInfo=0x14a1d90)
    at /home/andres/src/postgresql/src/include/executor/executor.h:388
#16 ExecResult (pstate=<optimized out>)
    at /home/andres/src/postgresql/src/backend/executor/nodeResult.c:136
#17 0x00000000006afe04 in ExecProcNode (node=0x14a18b0)
    at /home/andres/src/postgresql/src/include/executor/executor.h:240
#18 ExecModifyTable (pstate=0x14a1638)
    at /home/andres/src/postgresql/src/backend/executor/nodeModifyTable.c:2072
#19 0x0000000000687eec in ExecProcNode (node=0x14a1638)
    at /home/andres/src/postgresql/src/include/executor/executor.h:240
#20 ExecutePlan (execute_once=<optimized out>, dest=0xb1a9e0 <spi_printtupDR>,
    direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>,
    operation=CMD_INSERT, use_parallel_mode=<optimized out>, planstate=0x14a1638,
    estate=0x14a12c8) at /home/andres/src/postgresql/src/backend/executor/execMain.c:1646
#21 standard_ExecutorRun (queryDesc=0x1432db0, direction=<optimized out>, count=0,
    execute_once=<optimized out>)
    at /home/andres/src/postgresql/src/backend/executor/execMain.c:364
#22 0x00000000006bfe2e in _SPI_pquery (tcount=0, fire_triggers=true,
    queryDesc=<optimized out>) at /home/andres/src/postgresql/src/backend/executor/spi.c:2521
#23 _SPI_execute_plan (plan=<optimized out>, paramLI=<optimized out>,
    snapshot=<optimized out>, crosscheck_snapshot=<optimized out>,
    read_only=<optimized out>, fire_triggers=<optimized out>, tcount=<optimized out>)
    at /home/andres/src/postgresql/src/backend/executor/spi.c:2297
#24 0x00000000006c1ee6 in SPI_execute (tcount=tcount@entry=0, read_only=false,
    src=0x1433648 "INSERT INTO session_hook_log (dbname, username, hook_at) VALUES
    ('contrib_regression', 'regress_sess_hook_usr2', 'END');")
    at /home/andres/src/postgresql/src/backend/executor/spi.c:514
#25 SPI_exec (src=0x1433648 "INSERT INTO session_hook_log (dbname, username, hook_at)
    VALUES ('contrib_regression', 'regress_sess_hook_usr2', 'END');",
    tcount=tcount@entry=0) at /home/andres/src/postgresql/src/backend/executor/spi.c:526
#26 0x00007f3e843cdf6f in register_session_hook (hook_at=<optimized out>)
    at /home/andres/src/postgresql/src/test/modules/test_session_hooks/test_session_hooks.c:68
#27 0x00000000007e5d65 in shmem_exit (code=code@entry=0)
    at /home/andres/src/postgresql/src/backend/storage/ipc/ipc.c:239
#28 0x00000000007e5e6f in proc_exit_prepare (code=code@entry=0)
    at /home/andres/src/postgresql/src/backend/storage/ipc/ipc.c:194
#29 0x00000000007e5f02 in proc_exit (code=code@entry=0)
    at /home/andres/src/postgresql/src/backend/storage/ipc/ipc.c:107
#30 0x000000000080bb35 in PostgresMain (argc=<optimized out>, argv=argv@entry=0x13a3fb0,
    dbname=<optimized out>, username=<optimized out>)
    at /home/andres/src/postgresql/src/backend/tcop/postgres.c:4470
#31 0x0000000000785436 in BackendRun (port=0x139d360, port=0x139d360)
    at /home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4459
#32 BackendStartup (port=0x139d360)
    at /home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4150
#33 ServerLoop () at /home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1718
#34 0x0000000000788307 in PostmasterMain (argc=argc@entry=8, argv=argv@entry=0x1372ed0)
    at /home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1391
#35 0x00000000004e6a54 in main (argc=8, argv=0x1372ed0)
    at /home/andres/src/postgresql/src/backend/main/main.c:210

The reason for that is simply that at that point llvmjit.c's own
shutdown hook has already shutdown parts of the JIT subsystem (which
needs to flush profiling information to disk, for profiling to be
useful).

Greetings,

Andres Freund
On Tue, Oct 01, 2019 at 05:02:28PM -0700, Andres Freund wrote:
> The reason for that is simply that at that point llvmjit.c's own
> shutdown hook has already shutdown parts of the JIT subsystem (which
> needs to flush profiling information to disk, for profiling to be
> useful).

Hmm.  I missed the actual point.  The current location for the session
end hook has been chosen because we are sure that any transaction has
been aborted properly, and we'd still be limited with a hook in
proc_exit_prepare() because of that same argument.  I am just going to
revert the patch.
--
Michael
Attachments
On Wed, Oct 2, 2019 at 9:26 AM Michael Paquier <michael@paquier.xyz> wrote:
> On Tue, Oct 01, 2019 at 05:02:28PM -0700, Andres Freund wrote:
> > The reason for that is simply that at that point llvmjit.c's own
> > shutdown hook has already shutdown parts of the JIT subsystem (which
> > needs to flush profiling information to disk, for profiling to be
> > useful).
>
> Hmm.  I missed the actual point.  The current location for the session
> end hook has been chosen because we are sure that any transaction has
> been aborted properly, and we'd still be limited with a hook in
> proc_exit_prepare() because of that same argument.  I am just going to
> revert the patch.

If only session end hook is problematic, you will commit session start
hook again?

Regards,

--
Fujii Masao
On Wed, Oct 02, 2019 at 01:27:50PM +0900, Fujii Masao wrote:
> If only session end hook is problematic, you will commit session start
> hook again?

Sure, it would be possible to cut the apple in half here.  Now my
understanding was that both hooks were a set.  What do people think?

Note that to make the stuff complete it is still required to pass down
--create-user to REGRESS_OPTS of the test module's Makefile to avoid
regression test failures with Windows machines because of unmatching
HBA entries.
--
Michael
Attachments
On 2019-Oct-02, Michael Paquier wrote:
> On Wed, Oct 02, 2019 at 01:27:50PM +0900, Fujii Masao wrote:
> > If only session end hook is problematic, you will commit session start
> > hook again?
>
> Sure, it would be possible to cut the apple in half here.  Now my
> understanding was that both hooks were a set.  What do people think?

I think that having just session start is still useful, even if I
encourage you to get session end covered, because yes they do form a
set :-)  I don't think it's completely *impossible* to get session end
to work, only it needs to be run prior to any subsystem shutdown (I
haven't actually looked at the code, mind).  Of course, it'll need to
be clear that it might not run in certain cases.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Oct 2, 2019 at 10:08 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
> On 2019-Oct-02, Michael Paquier wrote:
> > On Wed, Oct 02, 2019 at 01:27:50PM +0900, Fujii Masao wrote:
> > > If only session end hook is problematic, you will commit session start
> > > hook again?
> >
> > Sure, it would be possible to cut the apple in half here.  Now my
> > understanding was that both hooks were a set.  What do people think?
>
> I think that having just session start is still useful

+1

Regarding the session end hook, you can do almost the same thing as
that hook by calling on_shmem_exit(), before_shmem_exit() or
on_proc_exit() in another hook, like the session start hook.  This
approach also has the same issue discussed upthread, though.  Anyway,
I'm not sure if a session end hook is "actually" necessary.

Regards,

--
Fujii Masao
Hi,

On 2019-10-02 13:27:50 +0900, Fujii Masao wrote:
> On Wed, Oct 2, 2019 at 9:26 AM Michael Paquier <michael@paquier.xyz> wrote:
> > On Tue, Oct 01, 2019 at 05:02:28PM -0700, Andres Freund wrote:
> > > The reason for that is simply that at that point llvmjit.c's own
> > > shutdown hook has already shutdown parts of the JIT subsystem (which
> > > needs to flush profiling information to disk, for profiling to be
> > > useful).
> >
> > Hmm.  I missed the actual point.  The current location for the session
> > end hook has been chosen because we are sure that any transaction has
> > been aborted properly, and we'd still be limited with a hook in
> > proc_exit_prepare() because of that same argument.  I am just going to
> > revert the patch.
>
> If only session end hook is problematic, you will commit session start
> hook again?

My impression is that these patches first need considerably more
actual design and code review.  This wasn't a subtle issue or
anything.

Greetings,

Andres Freund