Thread: elog(FATAL) vs shared memory


elog(FATAL) vs shared memory

From: Tom Lane
Date:
In this thread:
http://archives.postgresql.org/pgsql-bugs/2007-03/msg00145.php
we eventually determined that the reported lockup had three components:

(1) something (still not sure what --- Martin and Mark, I'd really like
to know) was issuing random SIGTERMs to various postgres processes
including autovacuum.

(2) if a SIGTERM happens to arrive while btbulkdelete is running,
the next CHECK_FOR_INTERRUPTS will do elog(FATAL), causing elog.c
to do proc_exit(0), leaving the vacuum still recorded as active in
the shared memory array maintained by _bt_start_vacuum/_bt_end_vacuum.
The PG_TRY block in btbulkdelete doesn't get a chance to clean up.

(3) eventually, either we try to re-vacuum the same index or
accumulation of bogus active entries overflows the array.
Either way, _bt_start_vacuum throws an error, which btbulkdelete
PG_CATCHes, leading to _bt_end_vacuum trying to re-acquire the LWLock
already taken by _bt_start_vacuum, meaning that the process hangs up.
And then so does anything else that needs to take that LWLock...
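
For reference, the pattern in btbulkdelete looks roughly like this
(schematic, not the exact source):

cycleid = _bt_start_vacuum(rel);   /* records entry in shared array */
PG_TRY();
{
    btvacuumscan(info, stats, callback, callback_state, cycleid);
}
PG_CATCH();
{
    _bt_end_vacuum(rel);           /* reached on ERROR ... */
    PG_RE_THROW();
}
PG_END_TRY();
_bt_end_vacuum(rel);               /* ... or on normal completion ... */

/* ... but on elog(FATAL), elog.c calls proc_exit() directly, so
 * control never reaches either cleanup call. */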

Point (3) is already fixed in CVS, but point (2) is a lot nastier.
What it essentially says is that trying to clean up shared-memory
state in a PG_TRY block is unsafe: you can't be certain you'll
get to do it.  Now this is not a big deal during normal SIGTERM or
SIGQUIT database shutdown, because we're going to abandon the shared
memory segment anyway.  However, if we ever want to support individual
session kill via SIGTERM, it's a problem.  Even if we were not
interested in someday considering that a supported feature, it seems
that dealing with random SIGTERMs is needed for robustness in at least
some environments.

AFAICS, there are basically two ways we might try to approach this:

Plan A: establish the rule that you mustn't try to clean up shared
memory state in a PG_CATCH block.  Anything you need to do like that
has to be handled by an on_shmem_exit hook function, so it will be
called during a FATAL exit.  (Or maybe you can do it in PG_CATCH for
normal ERROR cases, but you need a backing on_shmem_exit hook to
clean up for FATAL.)
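
In code, Plan A would look something like this (a sketch only, with an
invented callback name; on_shmem_exit() is the existing hook API from
storage/ipc.h):

static void
btvacuum_exit_cleanup(int code, Datum arg)
{
    /* runs inside proc_exit(), so it fires even on elog(FATAL) */
    _bt_end_vacuum((Relation) DatumGetPointer(arg));
}

...
on_shmem_exit(btvacuum_exit_cleanup, PointerGetDatum(rel));
cycleid = _bt_start_vacuum(rel);

We'd also need some way to disarm the hook once the normal cleanup path
has run, so it doesn't fire again at every later exit.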

Plan B: change the handling of FATAL errors so that they are thrown
like normal errors, and the proc_exit call happens only when we get
out to the outermost control level in postgres.c.  This would mean
that PG_CATCH blocks get a chance to clean up before the FATAL exit
happens.  The problem with that is that a non-cooperative PG_CATCH
block might think it could "recover" from the error, and then the exit
does not happen at all.  We'd need a coding rule that PG_CATCH blocks
*must* re-throw FATAL errors, which seems at least as ugly as Plan A.
In particular, all three of the external-interpreter PLs are willing
to return errors into the external interpreter, and AFAICS we'd be
entirely at the mercy of the user-written Perl or Python or Tcl code
whether it re-throws the error or not.

So Plan B seems unacceptably fragile.  Does anyone see a way to fix it,
or perhaps a Plan C with a totally different idea?  Plan A seems pretty
ugly but it's the best I can come up with.
        regards, tom lane


Re: elog(FATAL) vs shared memory

From: Mark Shuttleworth
Date:
Tom Lane wrote:
> (1) something (still not sure what --- Martin and Mark, I'd really like
> to know) was issuing random SIGTERMs to various postgres processes
> including autovacuum.

This may be a misfeature in our test harness - I'll ask Stuart Bishop to
comment.

Mark

Re: elog(FATAL) vs shared memory

From: Stuart Bishop
Date:
Mark Shuttleworth wrote:
> Tom Lane wrote:
>> (1) something (still not sure what --- Martin and Mark, I'd really like
>> to know) was issuing random SIGTERMs to various postgres processes
>> including autovacuum.
>>
>
> This may be a misfeature in our test harness - I'll ask Stuart Bishop to
> comment.

After a test is run, the test harness kills any outstanding connections so
we can drop the test database. Without this, a failing test could leave open
connections dangling causing the drop database to block.

CREATE OR REPLACE FUNCTION _killall_backends(text)
RETURNS Boolean AS $$
    import os
    from signal import SIGTERM

    plan = plpy.prepare(
        "SELECT procpid FROM pg_stat_activity WHERE datname=$1", ['text']
        )
    success = True
    for row in plpy.execute(plan, args):
        try:
            plpy.info("Killing %d" % row['procpid'])
            os.kill(row['procpid'], SIGTERM)
        except OSError:
            success = False

    return success
$$ LANGUAGE plpythonu;
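
We invoke it from a connection to a *different* database, along the lines
of (database name purely illustrative):

SELECT _killall_backends('some_test_db');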

--
Stuart Bishop <stuart.bishop@canonical.com>   http://www.canonical.com/
Canonical Ltd.                                http://www.ubuntu.com/


Re: elog(FATAL) vs shared memory

From: Tom Lane
Date:
Stuart Bishop <stuart.bishop@canonical.com> writes:
> After a test is run, the test harness kills any outstanding connections so
> we can drop the test database. Without this, a failing test could leave open
> connections dangling causing the drop database to block.

Just to make it perfectly clear: we don't consider SIGTERMing individual
backends to be a supported operation (maybe someday, but not today).
That's why you had to resort to plpythonu to do this.  I hope you don't
have anything analogous in your production databases ...
        regards, tom lane


Re: elog(FATAL) vs shared memory

From: Mark Shuttleworth
Date:
Tom Lane wrote:
> Stuart Bishop <stuart.bishop@canonical.com> writes:
>> After a test is run, the test harness kills any outstanding connections so
>> we can drop the test database. Without this, a failing test could leave open
>> connections dangling causing the drop database to block.
>
> Just to make it perfectly clear: we don't consider SIGTERMing individual
> backends to be a supported operation (maybe someday, but not today).
> That's why you had to resort to plpythonu to do this.  I hope you don't
> have anything analogous in your production databases ...

Ah, that could explain it. With the recent patches it seems to be working
OK, but I guess we should find a more standard way to rejig the db during
the test runs.

Mark

Re: elog(FATAL) vs shared memory

From: Stuart Bishop
Date:
Tom Lane wrote:
> Stuart Bishop <stuart.bishop@canonical.com> writes:
>> After a test is run, the test harness kills any outstanding connections so
>> we can drop the test database. Without this, a failing test could leave open
>> connections dangling causing the drop database to block.
>
> Just to make it perfectly clear: we don't consider SIGTERMing individual
> backends to be a supported operation (maybe someday, but not today).
> That's why you had to resort to plpythonu to do this.  I hope you don't
> have anything analogous in your production databases ...

No - just the test suite. It seems to be the only way to terminate open
connections, which is a requirement for hooking PostgreSQL up to a test
suite or any other situation where you need to drop a database *now*
rather than when your clients decide to disconnect (well... unless we
refactor to start a dedicated postgres instance for each test, but our
overheads are already pretty huge).

--
Stuart Bishop <stuart.bishop@canonical.com>   http://www.canonical.com/
Canonical Ltd.                                http://www.ubuntu.com/


Re: elog(FATAL) vs shared memory

From: Jim Nasby
Date:
FWIW, you might want to put some safeguards in there so that you  
don't try to inadvertently kill the backend that's running that  
function... unfortunately I don't think there's a built-in function  
to tell you the PID of the backend you're connected to; if you're  
connecting via TCP you could use inet_client_addr() and  
inet_client_port(), but that won't work if you're using the socket to  
connect.

On Apr 5, 2007, at 6:23 AM, Stuart Bishop wrote:

> Mark Shuttleworth wrote:
>> Tom Lane wrote:
>>> (1) something (still not sure what --- Martin and Mark, I'd  
>>> really like
>>> to know) was issuing random SIGTERMs to various postgres processes
>>> including autovacuum.
>>>
>>
>> This may be a misfeature in our test harness - I'll ask Stuart  
>> Bishop to
>> comment.
>
> After a test is run, the test harness kills any outstanding  
> connections so
> we can drop the test database. Without this, a failing test could  
> leave open
> connections dangling causing the drop database to block.
>
> CREATE OR REPLACE FUNCTION _killall_backends(text)
> RETURNS Boolean AS $$
>     import os
>     from signal import SIGTERM
>
>     plan = plpy.prepare(
>         "SELECT procpid FROM pg_stat_activity WHERE datname=$1",  
> ['text']
>         )
>     success = True
>     for row in plpy.execute(plan, args):
>         try:
>             plpy.info("Killing %d" % row['procpid'])
>             os.kill(row['procpid'], SIGTERM)
>         except OSError:
>             success = False
>
>     return success
> $$ LANGUAGE plpythonu;
>
> -- 
> Stuart Bishop <stuart.bishop@canonical.com>   http://www.canonical.com/
> Canonical Ltd.                                http://www.ubuntu.com/
>

--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)




Re: elog(FATAL) vs shared memory

From: Heikki Linnakangas
Date:
Tom Lane wrote:
> (2) if a SIGTERM happens to arrive while btbulkdelete is running,
> the next CHECK_FOR_INTERRUPTS will do elog(FATAL), causing elog.c
> to do proc_exit(0), leaving the vacuum still recorded as active in
> the shared memory array maintained by _bt_start_vacuum/_bt_end_vacuum.
> The PG_TRY block in btbulkdelete doesn't get a chance to clean up.

I skimmed through all users of PG_TRY/CATCH in the backend to check
whether there are other problems like that looming. One that looks
dangerous is in pg_start_backup() in xlog.c: the forcePageWrites flag in
shared memory is cleared in a PG_CATCH block. It's not as severe, though,
since the flag can be cleared manually by calling pg_stop_backup(), and
leaving it set only leads to degraded performance.

> (3) eventually, either we try to re-vacuum the same index or
> accumulation of bogus active entries overflows the array.
> Either way, _bt_start_vacuum throws an error, which btbulkdelete
> PG_CATCHes, leading to _bt_end_vacuum trying to re-acquire the LWLock
> already taken by _bt_start_vacuum, meaning that the process hangs up.
> And then so does anything else that needs to take that LWLock...

I also looked for other occurrences of point (3), but couldn't find any,
so I guess we're now safe from it.

> Point (3) is already fixed in CVS, but point (2) is a lot nastier.
> What it essentially says is that trying to clean up shared-memory
> state in a PG_TRY block is unsafe: you can't be certain you'll
> get to do it.  Now this is not a big deal during normal SIGTERM or
> SIGQUIT database shutdown, because we're going to abandon the shared
> memory segment anyway.  However, if we ever want to support individual
> session kill via SIGTERM, it's a problem.  Even if we were not
> interested in someday considering that a supported feature, it seems
> that dealing with random SIGTERMs is needed for robustness in at least
> some environments.

Agreed. We should do our best to be safe from SIGTERMs, even if we don't 
consider it supported.

> AFAICS, there are basically two ways we might try to approach this:
> 
> Plan A: establish the rule that you mustn't try to clean up shared
> memory state in a PG_CATCH block.  Anything you need to do like that
> has to be handled by an on_shmem_exit hook function, so it will be
> called during a FATAL exit.  (Or maybe you can do it in PG_CATCH for
> normal ERROR cases, but you need a backing on_shmem_exit hook to
> clean up for FATAL.)
> 
> Plan B: change the handling of FATAL errors so that they are thrown
> like normal errors, and the proc_exit call happens only when we get
> out to the outermost control level in postgres.c.  This would mean
> that PG_CATCH blocks get a chance to clean up before the FATAL exit
> happens.  The problem with that is that a non-cooperative PG_CATCH
> block might think it could "recover" from the error, and then the exit
> does not happen at all.  We'd need a coding rule that PG_CATCH blocks
> *must* re-throw FATAL errors, which seems at least as ugly as Plan A.
> In particular, all three of the external-interpreter PLs are willing
> to return errors into the external interpreter, and AFAICS we'd be
> entirely at the mercy of the user-written Perl or Python or Tcl code
> whether it re-throws the error or not.
> 
> So Plan B seems unacceptably fragile.  Does anyone see a way to fix it,
> or perhaps a Plan C with a totally different idea?  Plan A seems pretty
> ugly but it's the best I can come up with.

Yeah, plan A seems like the way to go.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com


Re: elog(FATAL) vs shared memory

From: Jim Nasby
Date:
On Apr 11, 2007, at 6:23 PM, Jim Nasby wrote:
> FWIW, you might want to put some safeguards in there so that you  
> don't try to inadvertently kill the backend that's running that  
> function... unfortunately I don't think there's a built-in function  
> to tell you the PID of the backend you're connected to; if you're  
> connecting via TCP you could use inet_client_addr() and  
> inet_client_port(), but that won't work if you're using the socket  
> to connect.

*wipes egg off face*

There is a pg_backend_pid() function, even if it's not documented  
with the other functions (it's in the stats function stuff for some  
reason).
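
So the query in Stuart's function could presumably exclude the calling
backend with something like:

SELECT procpid FROM pg_stat_activity
WHERE datname = $1 AND procpid <> pg_backend_pid();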
--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)




Re: elog(FATAL) vs shared memory

From: Gregory Stark
Date:
"Heikki Linnakangas" <heikki@enterprisedb.com> writes:

> Tom Lane wrote:
>
>> AFAICS, there are basically two ways we might try to approach this:
>>
>> Plan A: establish the rule that you mustn't try to clean up shared
>> memory state in a PG_CATCH block.  Anything you need to do like that
>> has to be handled by an on_shmem_exit hook function, so it will be
>> called during a FATAL exit.  (Or maybe you can do it in PG_CATCH for
>> normal ERROR cases, but you need a backing on_shmem_exit hook to
>> clean up for FATAL.)
>>...
>> So Plan B seems unacceptably fragile.  Does anyone see a way to fix it,
>> or perhaps a Plan C with a totally different idea?  Plan A seems pretty
>> ugly but it's the best I can come up with.
>
> Yeah, plan A seems like the way to go.

The alternative is: instead of a general-purpose shmem hook, you note the
pid of the process that is expected to handle the cleanup. So for instance
something like pg_start_backup would store its pid instead of just setting
a flag. Then someone who comes along later and finds the field set has to
double-check whether that pid is actually still around, and if not, clean
up the stale entry itself.
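
Schematically, something like this (field and message invented purely to
illustrate the idea; the liveness test is just kill(pid, 0)):

/* in the shared-memory struct: the owner's pid instead of a boolean */
pid_t   forcePageWritesOwner;   /* 0 if not in use */

/* a later caller finding the field set: */
if (XLogCtl->forcePageWritesOwner != 0)
{
    if (kill(XLogCtl->forcePageWritesOwner, 0) != 0 && errno == ESRCH)
        XLogCtl->forcePageWritesOwner = 0;  /* owner died; reclaim */
    else
        elog(ERROR, "a backup is already in progress");
}
XLogCtl->forcePageWritesOwner = MyProcPid;  /* claim it ourselves */

One wrinkle is pid reuse: a recycled pid could make a stale entry look
live, though the window for that seems pretty narrow.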

--
Gregory Stark
EnterpriseDB          http://www.enterprisedb.com



Re: elog(FATAL) vs shared memory

From: Stuart Bishop
Date:
Jim Nasby wrote:
> On Apr 11, 2007, at 6:23 PM, Jim Nasby wrote:
>> FWIW, you might want to put some safeguards in there so that you don't
>> try to inadvertently kill the backend that's running that function...
>> unfortunately I don't think there's a built-in function to tell you
>> the PID of the backend you're connected to; if you're connecting via
>> TCP you could use inet_client_addr() and inet_client_port(), but that
>> won't work if you're using the socket to connect.
>
> *wipes egg off face*
>
> There is a pg_backend_pid() function, even if it's not documented with
> the other functions (it's in the stats function stuff for some reason).

eh. No worries - my safeguard is just a comment saying 'don't connect to the
same database you are killing the connections of' :-)


--
Stuart Bishop <stuart.bishop@canonical.com>   http://www.canonical.com/
Canonical Ltd.                                http://www.ubuntu.com/


Re: elog(FATAL) vs shared memory

From: Bruce Momjian
Date:
Where are we on this?

---------------------------------------------------------------------------

Tom Lane wrote:
> In this thread:
> http://archives.postgresql.org/pgsql-bugs/2007-03/msg00145.php
> we eventually determined that the reported lockup had three components:
> 
> (1) something (still not sure what --- Martin and Mark, I'd really like
> to know) was issuing random SIGTERMs to various postgres processes
> including autovacuum.
> 
> (2) if a SIGTERM happens to arrive while btbulkdelete is running,
> the next CHECK_FOR_INTERRUPTS will do elog(FATAL), causing elog.c
> to do proc_exit(0), leaving the vacuum still recorded as active in
> the shared memory array maintained by _bt_start_vacuum/_bt_end_vacuum.
> The PG_TRY block in btbulkdelete doesn't get a chance to clean up.
> 
> (3) eventually, either we try to re-vacuum the same index or
> accumulation of bogus active entries overflows the array.
> Either way, _bt_start_vacuum throws an error, which btbulkdelete
> PG_CATCHes, leading to _bt_end_vacuum trying to re-acquire the LWLock
> already taken by _bt_start_vacuum, meaning that the process hangs up.
> And then so does anything else that needs to take that LWLock...
> 
> Point (3) is already fixed in CVS, but point (2) is a lot nastier.
> What it essentially says is that trying to clean up shared-memory
> state in a PG_TRY block is unsafe: you can't be certain you'll
> get to do it.  Now this is not a big deal during normal SIGTERM or
> SIGQUIT database shutdown, because we're going to abandon the shared
> memory segment anyway.  However, if we ever want to support individual
> session kill via SIGTERM, it's a problem.  Even if we were not
> interested in someday considering that a supported feature, it seems
> that dealing with random SIGTERMs is needed for robustness in at least
> some environments.
> 
> AFAICS, there are basically two ways we might try to approach this:
> 
> Plan A: establish the rule that you mustn't try to clean up shared
> memory state in a PG_CATCH block.  Anything you need to do like that
> has to be handled by an on_shmem_exit hook function, so it will be
> called during a FATAL exit.  (Or maybe you can do it in PG_CATCH for
> normal ERROR cases, but you need a backing on_shmem_exit hook to
> clean up for FATAL.)
> 
> Plan B: change the handling of FATAL errors so that they are thrown
> like normal errors, and the proc_exit call happens only when we get
> out to the outermost control level in postgres.c.  This would mean
> that PG_CATCH blocks get a chance to clean up before the FATAL exit
> happens.  The problem with that is that a non-cooperative PG_CATCH
> block might think it could "recover" from the error, and then the exit
> does not happen at all.  We'd need a coding rule that PG_CATCH blocks
> *must* re-throw FATAL errors, which seems at least as ugly as Plan A.
> In particular, all three of the external-interpreter PLs are willing
> to return errors into the external interpreter, and AFAICS we'd be
> entirely at the mercy of the user-written Perl or Python or Tcl code
> whether it re-throws the error or not.
> 
> So Plan B seems unacceptably fragile.  Does anyone see a way to fix it,
> or perhaps a Plan C with a totally different idea?  Plan A seems pretty
> ugly but it's the best I can come up with.
> 
>             regards, tom lane
> 

--
Bruce Momjian  <bruce@momjian.us>          http://momjian.us
EnterpriseDB                               http://www.enterprisedb.com

 + If your life is a hard drive, Christ can be your backup. +


Re: elog(FATAL) vs shared memory

From: Tom Lane
Date:
Bruce Momjian <bruce@momjian.us> writes:
> Where are we on this?

Still trying to think of a less messy solution...

>> What it essentially says is that trying to clean up shared-memory
>> state in a PG_TRY block is unsafe: you can't be certain you'll
>> get to do it.
        regards, tom lane


Re: elog(FATAL) vs shared memory

From: Bruce Momjian
Date:
Tom Lane wrote:
> Bruce Momjian <bruce@momjian.us> writes:
> > Where are we on this?
> 
> Still trying to think of a less messy solution...

OK, put in the patches hold queue for 8.4.

---------------------------------------------------------------------------


> 
> >> What it essentially says is that trying to clean up shared-memory
> >> state in a PG_TRY block is unsafe: you can't be certain you'll
> >> get to do it.
> 
>             regards, tom lane
> 

--
Bruce Momjian  <bruce@momjian.us>          http://momjian.us
EnterpriseDB                               http://www.enterprisedb.com

 + If your life is a hard drive, Christ can be your backup. +