Re: hung backends stuck in spinlock heavy endless loop

From Merlin Moncure
Subject Re: hung backends stuck in spinlock heavy endless loop
Date
Msg-id CAHyXU0zgrck=U0zjeMU49vuZ-q7jNy0K2jHD72miigHv-wT5Sw@mail.gmail.com
Whole thread Raw
In response to Re: hung backends stuck in spinlock heavy endless loop  (Merlin Moncure <mmoncure@gmail.com>)
Responses Re: hung backends stuck in spinlock heavy endless loop  (Andres Freund <andres@2ndquadrant.com>)
List pgsql-hackers
On Thu, Jan 15, 2015 at 8:02 AM, Merlin Moncure <mmoncure@gmail.com> wrote:
> On Thu, Jan 15, 2015 at 6:04 AM, Heikki Linnakangas
> <hlinnakangas@vmware.com> wrote:
>> On 01/15/2015 03:23 AM, Peter Geoghegan wrote:
>>>
>>> So now the question is: how did that inconsistency arise? It didn't
>>> necessarily arise at the time of the (presumed) split of block 2 to
>>> create 9. It could be that the opaque area was changed by something
>>> else, some time later. I'll investigate more.
>>
>>
>> Merlin, could you re-run the test with a WAL archive (if you don't have one
>> already), and then run pg_xlogdump, filtering it to show only the changes to
>> the index? That should show us how the index got to be the way it is. Also,
>> if you could post a copy of the raw relation file for pg_class_oid_index; I
>> assume it's not too large.
>>
>> Something like:
>>
>> pg_xlogdump -r Btree -p walarchive/ -s 0/20035D0 | grep 11917
>>
>> 11917 is the relfilenode of pg_class_oid_index on a freshly initdb'd
>> cluster. In case it's not the same on your system, you can use oid2name to
>> find it out.
>
> I'm on it.  Will try this first, then patch removal.
>
> Question: Coming in this morning I did an immediate restart and logged
> into the database and queried pg_class via index.   Everything was
> fine, and the leftright verify returns nothing.  How did it repair
> itself without a reindex?
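
(Aside on Heikki's relfilenode note above: rather than oid2name, the
relfilenode should also be obtainable straight from SQL, assuming the
catalogs are readable at all. Something like:

SELECT pg_relation_filenode('pg_class_oid_index');

I haven't double-checked that on this cluster, just noting it for
completeness.)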

The plot thickens!  I looped the test (still stock 9.4 at this point)
and went to lunch.  When I came back, the database was in recovery
mode.  Here is the rough sequence of events.

1) REINDEXED pg_class (wanted a clean slate for a full reproduction)
2) before too long (unfortunately I did not configure timestamps in the
log) I started seeing:

ERROR:  root page 3 of index "pg_class_oid_index" has level 0,
expected 1 at character 8
QUERY:  UPDATE CDSRunTable SET Finished = clock_timestamp(), DidSucceed = _DidSucceed,
        ErrorMessage = _ErrorMessage WHERE CDSRunTableId = _CdsRunTableId
CONTEXT:  PL/pgSQL function cdsreconcileruntable(bigint) line 82 at
SQL statement
STATEMENT:  SELECT CDSReconcileRunTable(2020)

...and again with a FATAL:
FATAL:  root page 3 of index "pg_class_oid_index" has level 0, expected 1
ERROR:  root page 3 of index "pg_class_oid_index" has level 0, expected 1
CONTEXT:  SQL statement "UPDATE CDSRunTable SET Finished = clock_timestamp(), DidSucceed = _DidSucceed,
        ErrorMessage = _ErrorMessage WHERE CDSRunTableId = _CdsRunTableId"
        PL/pgSQL function cdsreconcileruntable(bigint) line 82 at SQL statement


3) shortly (?) after that, I got:

WARNING:  did not find subXID 14955 in MyProc
CONTEXT:  PL/pgSQL function cdsreconcileruntable(bigint) line 35
during exception cleanup
WARNING:  you don't own a lock of type RowExclusiveLock
CONTEXT:  PL/pgSQL function cdsreconcileruntable(bigint) line 35
during exception cleanup
LOG:  could not send data to client: Broken pipe
CONTEXT:  PL/pgSQL function cdsreconcileruntable(bigint) line 35
during exception cleanup
STATEMENT:  SELECT CDSReconcileRunTable(2151)
WARNING:  ReleaseLockIfHeld: failed??
CONTEXT:  PL/pgSQL function cdsreconcileruntable(bigint) line 35
during exception cleanup
ERROR:  failed to re-find shared proclock object
CONTEXT:  PL/pgSQL function cdsreconcileruntable(bigint) line 35
during exception cleanup
STATEMENT:  SELECT CDSReconcileRunTable(2151)
WARNING:  AbortSubTransaction while in ABORT state
WARNING:  did not find subXID 14955 in MyProc
WARNING:  you don't own a lock of type AccessShareLock
WARNING:  ReleaseLockIfHeld: failed??
ERROR:  failed to re-find shared proclock object
WARNING:  AbortSubTransaction while in ABORT state
WARNING:  did not find subXID 14955 in MyProc
WARNING:  you don't own a lock of type AccessShareLock
WARNING:  ReleaseLockIfHeld: failed??
WARNING:  you don't own a lock of type ShareLock
TRAP: FailedAssertion("!(FastPathStrongRelationLocks->count[fasthashcode] > 0)", File: "lock.c", Line: 1240)
LOG:  server process (PID 10117) was terminated by signal 6: Aborted
LOG:  terminating any other active server processes

4) while I was still at lunch, the database started up again, and
quickly fell over:
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
ERROR:  could not open relation with OID 3597
STATEMENT:  SELECT CDSReconcileRunTable(2268)
ERROR:  could not open relation with OID 3597
STATEMENT:  SELECT CDSReconcileRunTable(2270)
ERROR:  could not open relation with OID 3597
STATEMENT:  SELECT CDSReconcileRunTable(2272)
ERROR:  could not open relation with OID 3597
STATEMENT:  SELECT CDSReconcileRunTable(2273)
TRAP: FailedAssertion("!(( ((void) ((bool) ((! assert_enabled) || !
(!(((bool) (((const void*)(&htup->t_ctid) != ((void *)0)) &&
((&htup->t_ctid)->ip_posid != 0)
LOG:  server process (PID 30520) was terminated by signal 6: Aborted
LOG:  terminating any other active server processes

This repeated a very large number of times.  Occasionally, though
infrequently, I'd see something like this during database startup:
LOG:  unexpected pageaddr 50/1F2D8000 in log segment
0000000100000051000000E6, offset 2981888
The address would sometimes change:
LOG:  unexpected pageaddr 50/1F2DA000 in log segment
0000000100000051000000E6, offset 2990080

Basically, what is happening here is that any attempt to connect to the
cds database fails with an assertion failure (I can connect to the
postgres database fine).
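
As an aside on the "could not open relation with OID 3597" errors above,
a quick way (untested here) to see which relation that OID maps to,
assuming the catalogs are otherwise readable; index scans are disabled
so the suspect pg_class_oid_index isn't used for the lookup:

SET enable_indexscan = off;
SET enable_bitmapscan = off;
SELECT oid, relname, relkind FROM pg_class WHERE oid = 3597;
RESET enable_indexscan;
RESET enable_bitmapscan;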

On Thu, Jan 15, 2015 at 12:47 PM, Peter Geoghegan <pg@heroku.com> wrote:
> Maybe the relevant WAL records are more or less correct, and recovery
> restores the database to its correct state, as opposed to the state it
> was actually in at the time of the crash due to a race condition or
> whatever. Could you possibly send bt_page_items() against block 9 of
> the index after recovery? It probably looks totally sane, but it would
> be nice to be sure, if you happen to still have the data.

Too late: I reindexed it.  Going back and reviewing the log, however, I
did notice that

ERROR:  root page 3 of index "pg_class_oid_index" has level 0,
expected 1 at character 8

had been thrown before the REINDEX.
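
For the archives, the sort of thing Peter is asking for, plus a check of
the metapage against the root page for the "has level 0, expected 1"
error, would be pageinspect queries roughly like the below (block numbers
taken from the errors above, and only meaningful against the pre-REINDEX
index, so this is just a sketch of what I'll capture next time):

CREATE EXTENSION IF NOT EXISTS pageinspect;
-- metapage's idea of where the root is and what level it has
SELECT root, level, fastroot, fastlevel FROM bt_metap('pg_class_oid_index');
-- level and flags stored on the alleged root page itself
SELECT btpo, btpo_flags FROM bt_page_stats('pg_class_oid_index', 3);
-- item-level dump of block 9, per Peter's request
SELECT * FROM bt_page_items('pg_class_oid_index', 9);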

merlin


