Thread: Don't clean up LLVM state when exiting in a bad way


Don't clean up LLVM state when exiting in a bad way

From: Jelte Fennema
Hi,

I ran into some segfaults when using Postgres that was compiled with LLVM 7. According to the backtraces these crashes happened during the call to llvm_shutdown, during cleanup after another out of memory condition. It seems that calls to LLVMOrcDisposeInstance can crash (at least on LLVM 7) when LLVM is left in a bad state. I attached the relevant part of the stacktrace to this email.

With the attached patch these segfaults went away. The patch turns llvm_shutdown into a no-op whenever the backend is exiting with an error. Based on my understanding of the code this should be totally fine. No memory should be leaked, since all memory will be cleaned up anyway once the backend exits shortly after. The only reason this cleanup code even seems to exist at all is to get useful LLVM profiling data. To me it seems acceptable if the profiling data is incorrect/missing when the backend exits with an error.
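In rough code form the idea looks like this (the attached patch is the authoritative version; this is only a sketch, assuming llvm_shutdown keeps its usual proc_exit callback signature in src/backend/jit/llvm/llvmjit.c):

/* Sketch only, not the attached patch: skip all LLVM cleanup when the
 * backend is exiting abnormally, since reentering LLVM in a bad state
 * can segfault. */
static void
llvm_shutdown(int code, Datum arg)
{
    if (code != 0)
        return;         /* exiting with an error: leave LLVM state alone */

    /* normal cleanup: emit profiling data, LLVMOrcDisposeInstance(), ... */
}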

Jelte
Attachments

Re: Don't clean up LLVM state when exiting in a bad way

From: Zhihong Yu


On Wed, Aug 18, 2021 at 8:01 AM Jelte Fennema <Jelte.Fennema@microsoft.com> wrote:
Hi,

I ran into some segfaults when using Postgres that was compiled with LLVM 7. According to the backtraces these crashes happened during the call to llvm_shutdown, during cleanup after another out of memory condition. It seems that calls to LLVMOrcDisposeInstance can crash (at least on LLVM 7) when LLVM is left in a bad state. I attached the relevant part of the stacktrace to this email.

With the attached patch these segfaults went away. The patch turns llvm_shutdown into a no-op whenever the backend is exiting with an error. Based on my understanding of the code this should be totally fine. No memory should be leaked, since all memory will be cleaned up anyway once the backend exits shortly after. The only reason this cleanup code even seems to exist at all is to get useful LLVM profiling data. To me it seems acceptable if the profiling data is incorrect/missing when the backend exits with an error.

Jelte
Hi,
Minor comment:

+        * shut LLVM down, this can result into a segfault. So if this process 

result into a segfault -> result in a segfault

Cheers

Re: Don't clean up LLVM state when exiting in a bad way

From: Justin Pryzby
On Wed, Aug 18, 2021 at 03:00:59PM +0000, Jelte Fennema wrote:
> I ran into some segfaults when using Postgres that was compiled with LLVM 7. According to the backtraces these
> crashes happened during the call to llvm_shutdown, during cleanup after another out of memory condition. It seems
> that calls to LLVMOrcDisposeInstance can crash (at least on LLVM 7) when LLVM is left in a bad state. I attached
> the relevant part of the stacktrace to this email.
>
> With the attached patch these segfaults went away. The patch turns llvm_shutdown into a no-op whenever the backend
> is exiting with an error. Based on my understanding of the code this should be totally fine. No memory should be
> leaked, since all memory will be cleaned up anyway once the backend exits shortly after. The only reason this
> cleanup code even seems to exist at all is to get useful LLVM profiling data. To me it seems acceptable if the
> profiling data is incorrect/missing when the backend exits with an error.

Andres, could you comment on this?

This seems to explain the crash I reported to you when testing your WIP patches
for the JIT memory leak.  I realize now that the crash happens without your
patches.
https://www.postgresql.org/message-id/20210419164130.byegpfrw46mzagcu@alap3.anarazel.de

I can reproduce the crash on master (not just v13, as I said before) compiled
on centos7, with:
LLVM_CONFIG=/usr/lib64/llvm7.0/bin/llvm-config CLANG=/opt/rh/llvm-toolset-7.0/root/usr/bin/clang

I cannot reproduce the crash after applying Jelte's patch.

I couldn't crash on ubuntu either, so maybe they have a patch which fixes this,
or maybe RH applied a patch which caused it...

postgres=# CREATE TABLE t AS SELECT i FROM generate_series(1,999999)i; VACUUM ANALYZE t;
postgres=# SET client_min_messages=debug; SET statement_timeout=333; SET jit_above_cost=0; SET jit_optimize_above_cost=-1; SET jit_inline_above_cost=-1; explain analyze SELECT sum(i) FROM t a NATURAL JOIN t b;

2021-09-05 22:47:12.807 ADT client backend[7563] psql ERROR:  canceling statement due to statement timeout
2021-09-05 22:47:12.880 ADT postmaster[7272] LOG:  background worker "parallel worker" (PID 8212) was terminated by signal 11: Segmentation fault
 

-- 
Justin



Re: Don't clean up LLVM state when exiting in a bad way

From: Andres Freund
Hi,

On 2021-08-18 15:00:59 +0000, Jelte Fennema wrote:
> I ran into some segfaults when using Postgres that was compiled with LLVM
> 7. According to the backtraces these crashes happened during the call to
> llvm_shutdown, during cleanup after another out of memory condition. It
> seems that calls to LLVMOrcDisposeInstance can crash (at least on LLVM 7)
> when LLVM is left in a bad state. I attached the relevant part of the
> stacktrace to this email.

> With the attached patch these segfaults went away. The patch turns
> llvm_shutdown into a no-op whenever the backend is exiting with an
> error. Based on my understanding of the code this should be totally fine. No
> memory should be leaked, since all memory will be cleaned up anyway once the
> backend exits shortly after. The only reason this cleanup code even seems to
> exist at all is to get useful LLVM profiling data. To me it seems
> acceptable if the profiling data is incorrect/missing when the backend exits
> with an error.

I think this is a tad too strong. We should continue to clean up on exit as
long as the error didn't happen while we're already inside llvm
code. Otherwise we lose some ability to find leaks. How about checking in the
error path whether fatal_new_handler_depth is > 0, and skipping cleanup in
that case? Because that's precisely when it should be unsafe to reenter LLVM.
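A rough sketch of what that would look like (not the committed fix; exactly how fatal_new_handler_depth gets consulted from llvm_shutdown is an assumption here):

/* Sketch of the suggestion above: fatal_new_handler_depth is non-zero
 * while we are executing inside LLVM with the fatal-on-OOM new handler
 * installed, so a non-zero value on the exit path means the error came
 * from within LLVM and reentering it for cleanup is unsafe. */
static void
llvm_shutdown(int code, Datum arg)
{
    if (fatal_new_handler_depth > 0)
        return;         /* error raised inside LLVM: skip cleanup */

    /* otherwise clean up as before, so profiling data still gets written out */
}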

Greetings,

Andres Freund



Re: Don't clean up LLVM state when exiting in a bad way

From: Alexander Lakhin
Hello hackers,
14.09.2021 04:32, Andres Freund wrote:
> On 2021-09-07 14:44:39 -0500, Justin Pryzby wrote:
>> On Tue, Sep 07, 2021 at 12:27:27PM -0700, Andres Freund wrote:
>>> I think this is a tad too strong. We should continue to clean up on exit as
>>> long as the error didn't happen while we're already inside llvm
>> code. Otherwise we lose some ability to find leaks. How about checking in the
>>> error path whether fatal_new_handler_depth is > 0, and skipping cleanup in
>>> that case? Because that's precisely when it should be unsafe to reenter
>>> LLVM.
> The more important reason is actually profiling information that needs to be
> written out.
>
> I've now pushed a fix to all relevant branches. Thanks all!
>
I encountered a similar issue last week, but only found this discussion
after the commit.
I'm afraid that it's not completely gone yet. I've reproduced a similar
crash (on edb4d95d) with
echo "statement_timeout = 50
jit_optimize_above_cost = 1
jit_inline_above_cost = 1
parallel_setup_cost=0
parallel_tuple_cost=0
" >/tmp/extra.config
TEMP_CONFIG=/tmp/extra.config  make check

parallel group (11 tests):  memoize explain hash_part partition_info reloptions tuplesort compression partition_aggregate indexing partition_prune partition_join
     partition_join               ... FAILED (test process exited with exit code 2)     1815 ms
     partition_prune              ... FAILED (test process exited with exit code 2)     1779 ms
     reloptions                   ... ok          146 ms

I've extracted the crash-causing fragment from the partition_prune test
to reproduce the segfault reliably (see the patch attached).
The segfault stack is:
Core was generated by `postgres: parallel worker for PID 12029                                       '.
Program terminated with signal 11, Segmentation fault.
#0  0x00007f045e0a88ca in notifyFreed (K=<optimized out>, Obj=..., this=<optimized out>)
    at /usr/src/debug/llvm-7.0.1.src/lib/ExecutionEngine/Orc/OrcCBindingsStack.h:485
485           Listener->NotifyFreeingObject(Obj);
(gdb) bt
#0  0x00007f045e0a88ca in notifyFreed (K=<optimized out>, Obj=..., this=<optimized out>)
    at /usr/src/debug/llvm-7.0.1.src/lib/ExecutionEngine/Orc/OrcCBindingsStack.h:485
#1  operator() (K=<optimized out>, Obj=..., __closure=<optimized out>)
    at /usr/src/debug/llvm-7.0.1.src/lib/ExecutionEngine/Orc/OrcCBindingsStack.h:226
#2  std::_Function_handler<void (unsigned long, llvm::object::ObjectFile const&), llvm::OrcCBindingsStack::OrcCBindingsStack(llvm::TargetMachine&, std::function<std::unique_ptr<llvm::orc::IndirectStubsManager, std::default_delete<llvm::orc::IndirectStubsManager> > ()>)::{lambda(unsigned long, llvm::object::ObjectFile const&)#3}>::_M_invoke(std::_Any_data const&, unsigned long, llvm::object::ObjectFile const&) (__functor=..., __args#0=<optimized out>, __args#1=...)
    at /usr/include/c++/4.8.2/functional:2071
#3  0x00007f045e0aa578 in operator() (__args#1=..., __args#0=<optimized out>, this=<optimized out>)
    at /usr/include/c++/4.8.2/functional:2471
...

The corresponding code in OrcCBindingsStack.h is:
void notifyFreed(orc::VModuleKey K, const object::ObjectFile &Obj) {
  for (auto &Listener : EventListeners)
    Listener->NotifyFreeingObject(Obj);
}
So probably one of the EventListeners has become null. I see that,
without debugging and profiling enabled, the only listener registration
in the postgres code is LLVMOrcRegisterJITEventListener.
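For illustration only, this is roughly how a listener ends up in the EventListeners list that notifyFreed() walks; the surrounding conditions and the listener constructors here are assumptions, not the exact postgres source:

/* Hypothetical sketch of registering JIT event listeners via the ORC C
 * bindings; each call appends an entry to the OrcCBindingsStack's
 * EventListeners vector that notifyFreed() iterates over. */
#include <llvm-c/ExecutionEngine.h>
#include <llvm-c/OrcBindings.h>

static void
register_jit_listeners(LLVMOrcJITStackRef stack)
{
    /* debugger support: lets gdb see JIT-compiled frames */
    LLVMOrcRegisterJITEventListener(stack,
                                    LLVMCreateGDBRegistrationListener());

    /* profiling support; note that LLVMCreatePerfJITEventListener() can
     * return NULL when LLVM was built without perf support, which would
     * leave a null entry in EventListeners on some LLVM versions */
    LLVMOrcRegisterJITEventListener(stack,
                                    LLVMCreatePerfJITEventListener());
}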

With LLVM 9 on the same CentOS 7 I don't get such a segfault. It also
doesn't happen on different OSes with LLVM 7. I still have no
explanation for that, but maybe there is a difference between LLVM
configure options, e.g. like this:
https://stackoverflow.com/questions/47712670/segmentation-fault-in-llvm-pass-when-using-registerstandardpasses

Best regards,
Alexander

Attachments

Re: Don't clean up LLVM state when exiting in a bad way

From: Andres Freund
Hi,

On September 13, 2021 9:00:00 PM PDT, Alexander Lakhin <exclusion@gmail.com> wrote:
>Hello hackers,
>14.09.2021 04:32, Andres Freund wrote:
>> On 2021-09-07 14:44:39 -0500, Justin Pryzby wrote:
>>> On Tue, Sep 07, 2021 at 12:27:27PM -0700, Andres Freund wrote:
>>>> I think this is a tad too strong. We should continue to clean up on exit as
>>>> long as the error didn't happen while we're already inside llvm
>>>> code. Otherwise we lose some ability to find leaks. How about checking in the
>>>> error path whether fatal_new_handler_depth is > 0, and skipping cleanup in
>>>> that case? Because that's precisely when it should be unsafe to reenter
>>>> LLVM.
>> The more important reason is actually profiling information that needs to be
>> written out.
>>
>> I've now pushed a fix to all relevant branches. Thanks all!
>>
>I encountered a similar issue last week, but found this discussion only
>after the commit.
>I'm afraid that it's not completely gone yet. I've reproduced a similar
>crash (on edb4d95d) with
>echo "statement_timeout = 50
>jit_optimize_above_cost = 1
>jit_inline_above_cost = 1
>parallel_setup_cost=0
>parallel_tuple_cost=0
>" >/tmp/extra.config
>TEMP_CONFIG=/tmp/extra.config  make check
>
>parallel group (11 tests):  memoize explain hash_part partition_info
>reloptions tuplesort compression partition_aggregate indexing
>partition_prune partition_join
>     partition_join               ... FAILED (test process exited with
>exit code 2)     1815 ms
>     partition_prune              ... FAILED (test process exited with
>exit code 2)     1779 ms
>     reloptions                   ... ok          146 ms
>
>I've extracted the crash-causing fragment from the partition_prune test
>to reproduce the segfault reliably (see the patch attached).
>The segfault stack is:
>Core was generated by `postgres: parallel worker for PID
>12029                                       '.
>Program terminated with signal 11, Segmentation fault.
>#0  0x00007f045e0a88ca in notifyFreed (K=<optimized out>, Obj=...,
>this=<optimized out>)
>    at
>/usr/src/debug/llvm-7.0.1.src/lib/ExecutionEngine/Orc/OrcCBindingsStack.h:485
>485           Listener->NotifyFreeingObject(Obj);
>(gdb) bt
>#0  0x00007f045e0a88ca in notifyFreed (K=<optimized out>, Obj=...,
>this=<optimized out>)
>    at
>/usr/src/debug/llvm-7.0.1.src/lib/ExecutionEngine/Orc/OrcCBindingsStack.h:485
>#1  operator() (K=<optimized out>, Obj=..., __closure=<optimized out>)
>    at
>/usr/src/debug/llvm-7.0.1.src/lib/ExecutionEngine/Orc/OrcCBindingsStack.h:226
>#2  std::_Function_handler<void (unsigned long, llvm::object::ObjectFile
>const&),
>llvm::OrcCBindingsStack::OrcCBindingsStack(llvm::TargetMachine&,
>std::function<std::unique_ptr<llvm::orc::IndirectStubsManager,
>std::default_delete<llvm::orc::IndirectStubsManager> >
>()>)::{lambda(unsigned long, llvm::object::ObjectFile
>const&)#3}>::_M_invoke(std::_Any_data const&, unsigned long,
>llvm::object::ObjectFile const&) (__functor=..., __args#0=<optimized
>out>, __args#1=...)
>    at /usr/include/c++/4.8.2/functional:2071
>#3  0x00007f045e0aa578 in operator() (__args#1=..., __args#0=<optimized
>out>, this=<optimized out>)
>    at /usr/include/c++/4.8.2/functional:2471
>...
>
>The corresponding code in OrcCBindingsStack.h is:
>void notifyFreed(orc::VModuleKey K, const object::ObjectFile &Obj) {
>    for (auto &Listener : EventListeners)
>     Listener->NotifyFreeingObject(Obj);
>}
>So probably one of the EventListeners has become null. I see that
>without debugging and profiling enabled the only listener registration
>in the postgres code is LLVMOrcRegisterJITEventListener.
>
>With LLVM 9 on the same Centos 7 I don't get such segfault. Also it
>doesn't happen on different OSes with LLVM 7.

That looks just like an LLVM bug to me, rather than the usage issue addressed in this thread.


> I still have no
> explanation for that, but maybe there is a difference between LLVM
> configure options, e.g. like this:
> https://stackoverflow.com/questions/47712670/segmentation-fault-in-llvm-pass-when-using-registerstandardpasses

Why is it not much more likely that bugs were fixed?


Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: Don't clean up LLVM state when exiting in a bad way

From: Alexander Lakhin
Hello Andres,
14.09.2021 08:05, Andres Freund wrote:
>
>> With LLVM 9 on the same Centos 7 I don't get such segfault. Also it
>> doesn't happen on different OSes with LLVM 7.
> That just like an llvm bug to me. Rather than the usage issue addressed in this thread.
But Justin has seen this strangeness too:
>
> I couldn't crash on ubuntu either, so maybe they have a patch which
> fixes this,
> or maybe RH applied a patch which caused it...
>
The script that Justin presented:
> postgres=# CREATE TABLE t AS SELECT i FROM generate_series(1,999999)i;
> VACUUM ANALYZE t;
> postgres=# SET client_min_messages=debug; SET statement_timeout=333;
> SET jit_above_cost=0; SET jit_optimize_above_cost=-1; SET
> jit_inline_above_cost=-1; explain analyze SELECT sum(i) FROM t a
> NATURAL JOIN t b;
causes a server crash on CentOS 7 with LLVM 7 (even with the fix
applied) and doesn't crash with LLVM 9 (I used llvm-toolset-9* and
devtoolset-9* packages from the
https://buildlogs.centos.org/c7-llvm-toolset-9.0.x86_64/ and
https://buildlogs.centos.org/c7-devtoolset-9.x86_64/ repositories).

Another script:
>
> python3 -c "import pg; db=pg.DB('dbname=postgres host=/tmp
> port=5678'); db.query('SET jit_above_cost=0; SET
> jit_inline_above_cost=-1; SET jit=on; SET client_min_messages=debug');
> db.query('begin'); db.query_formatted('SELECT 1 FROM
> generate_series(1,99)a WHERE a=%s', [1], inline=False);"
>
also causes the server crash with LLVM 7, and doesn't crash with LLVM 9.

So I wonder, isn't the fixed usage issue specific to LLVM 7, which is
not going to be supported anyway given that it has bugs like this?

Best regards,
Alexander



Re: [EXTERNAL] Re: Don't clean up LLVM state when exiting in a bad way

From: Jelte Fennema
> So I wonder, isn't the fixed usage issue specific to LLVM 7

That's definitely possible. I was unable to reproduce the issue I shared in my original email when postgres was
compiled with LLVM 10.

That's also why I sent an email to the pgsql-pkg-yum mailing list about options to use a newer version of LLVM on
CentOS 7:
https://www.postgresql.org/message-id/flat/AM5PR83MB0178475D87EFA290A4D0793DF7FF9%40AM5PR83MB0178.EURPRD83.prod.outlook.com
(no response so far though)