Re: Recent failures in IsolationCheck deadlock-hard

From Tom Lane
Subject Re: Recent failures in IsolationCheck deadlock-hard
Msg-id 22195.1566077308@sss.pgh.pa.us
In reply to Re: Recent failures in IsolationCheck deadlock-hard  (Thomas Munro <thomas.munro@gmail.com>)
List pgsql-hackers
Thomas Munro <thomas.munro@gmail.com> writes:
> On Tue, Aug 6, 2019 at 6:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Yeah, there have been half a dozen failures since deadlock-parallel
>> went in, mostly on critters that are slowed by CLOBBER_CACHE_ALWAYS
>> or valgrind.  I've tried repeatedly to reproduce that here, without
>> success :-(.  It's unclear whether the failures represent a real
>> code bug or just a problem in the test case, so I don't really want
>> to speculate about fixes till I can reproduce it.

> I managed to reproduce a failure that looks a lot like lousyjack's
> (note that there are two slightly different failure modes):
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2019-08-05%2011:33:02

> I did that by changing the deadlock_timeout values for sessions d1 and
> d2 to just a few milliseconds on my slowest computer, guessing that
> this might be a race involving the deadlock timeout and the time it
> takes for workers to fork and join a lock queue.

Yeah, I eventually managed to reproduce it (not too reliably) by
introducing a randomized delay into parallel worker startup.

The scenario seems to be: some d1a2 worker arrives so late that it's not
accounted for in the initial DeadLockCheck performed by some d2a1 worker.
The other d1a2 workers are released, and run and finish, but the late one
goes to sleep, with a long deadlock_timeout.  If the next DeadLockCheck is
run by e1l's worker, it prefers to release the d2a1 workers, which then all
run to completion.  When the late d1a2 worker finally wakes up and runs
DeadLockCheck, *there is no deadlock to resolve*: the d2 session is idle,
not waiting for any lock.  So the worker goes back to sleep, and we sit
till isolationtester times out.
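
(For anyone who wants to poke at a hung run: a rough diagnostic query along
these lines shows the shape of that state, i.e. the late worker sleeping on
its lock while d2 sits idle in its transaction.  This is just a sketch using
the standard pg_stat_activity/pg_locks views, not something the test runs.)

    -- Sketch: list backends stuck on a heavyweight lock, plus any ungranted
    -- lock they're waiting for, and backends sitting idle in transaction.
    SELECT a.pid, a.backend_type, a.state, a.wait_event_type, a.wait_event,
           l.locktype, l.objid, l.granted
    FROM pg_stat_activity a
    LEFT JOIN pg_locks l ON l.pid = a.pid AND NOT l.granted
    WHERE a.wait_event_type = 'Lock' OR a.state = 'idle in transaction';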

Another way to look at it is that there is a deadlock condition, but
one of the waits-for constraints is on the client side where DeadLockCheck
can't see it.  isolationtester is waiting for d1a2 to complete before it
will execute d1c, which would release session d2, so that d2 is effectively
waiting for d1, but DeadLockCheck doesn't know that and thinks that it's
equally good to unblock either d1 or d2.

The attached proposed patch resolves this by introducing another lock,
which d1 holds and d2 then tries to take, ensuring that the deadlock
detector will recognize that d1 must be released.
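
For reference, the advisory-lock wrappers the spec relies on look roughly
like this (paraphrased from the spec's setup block; see the file itself for
the exact definitions):

    -- lock_share/lock_excl wrap the shared and exclusive flavors of
    -- transaction-scoped advisory locks; the second argument just ties the
    -- call to a column of bigt so it's evaluated as part of the scan.
    create function lock_share(int,int) returns int language sql as
    'select pg_advisory_xact_lock_shared($1); select 1;' parallel safe;

    create function lock_excl(int,int) returns int language sql as
    'select pg_advisory_xact_lock($1); select 1;' parallel safe;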

I've run several thousand iterations of the test this way without a
problem, whereas before, the MTBF was maybe a hundred or two iterations
with the variable startup delay active.  So I think this fix is good,
but I could be wrong.  One notable thing is that every so often the
test takes ~10s to complete instead of a couple hundred msec.  I think
that what's happening there is that the last deadlock condition doesn't
form until after all of session d2's DeadLockChecks have run, meaning
that we don't spot the deadlock until some other session runs DeadLockCheck.
The test still passes, though.  This is probably fine, given that it would
never happen except on platforms that are horridly slow anyway.
Possibly we could shorten the 10s values to make that case complete
quicker, but I'm afraid of maybe breaking things on slow machines.

> Another thing I noticed is that all 4 times I managed to reproduce
> this, the "rearranged to" queue had only two entries; I can understand
> that d1's workers might not feature yet due to bad timing, but it's
> not clear to me why there should always be only one d2a1 worker and
> not more.

I noticed that too, and eventually realized that it's a
max_worker_processes constraint: we have two parallel workers waiting
in e1l and e2l, so if d1a2 takes four, there are only two slots left for
d2a1; and for reasons that aren't totally clear, we don't get to use the
last slot.  (Not sure if that's a bug in itself.)

The attached patch therefore also knocks max_parallel_workers_per_gather
down to 3 in this test, so that we have room for at least 2 d2a1 workers.
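
Spelling out the slot arithmetic (on the assumption that max_worker_processes
is at its default of 8):

    -- Back-of-the-envelope worker accounting, assuming max_worker_processes = 8.
    -- Two workers are already parked in e1l/e2l, and d1a2 grabs its share first.
    SELECT 8 - 2 - 4 AS d2a1_slots_before,  -- = 2, and one of those goes unused
           8 - 2 - 3 AS d2a1_slots_after;   -- = 3 once the per-gather cap is 3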

            regards, tom lane

diff --git a/src/test/isolation/expected/deadlock-parallel.out b/src/test/isolation/expected/deadlock-parallel.out
index 871a80c..cf4d07e 100644
--- a/src/test/isolation/expected/deadlock-parallel.out
+++ b/src/test/isolation/expected/deadlock-parallel.out
@@ -1,10 +1,10 @@
 Parsed test spec with 4 sessions

 starting permutation: d1a1 d2a2 e1l e2l d1a2 d2a1 d1c e1c d2c e2c
-step d1a1: SELECT lock_share(1,x) FROM bigt LIMIT 1;
-lock_share
+step d1a1: SELECT lock_share(1,x), lock_excl(3,x) FROM bigt LIMIT 1;
+lock_share     lock_excl

-1
+1              1
 step d2a2: select lock_share(2,x) FROM bigt LIMIT 1;
 lock_share

@@ -16,15 +16,19 @@ step d1a2: SET force_parallel_mode = on;
               SET parallel_tuple_cost = 0;
               SET min_parallel_table_scan_size = 0;
               SET parallel_leader_participation = off;
-              SET max_parallel_workers_per_gather = 4;
+              SET max_parallel_workers_per_gather = 3;
               SELECT sum(lock_share(2,x)) FROM bigt; <waiting ...>
 step d2a1: SET force_parallel_mode = on;
               SET parallel_setup_cost = 0;
               SET parallel_tuple_cost = 0;
               SET min_parallel_table_scan_size = 0;
               SET parallel_leader_participation = off;
-              SET max_parallel_workers_per_gather = 4;
-              SELECT sum(lock_share(1,x)) FROM bigt; <waiting ...>
+              SET max_parallel_workers_per_gather = 3;
+              SELECT sum(lock_share(1,x)) FROM bigt;
+              SET force_parallel_mode = off;
+              RESET parallel_setup_cost;
+              RESET parallel_tuple_cost;
+              SELECT lock_share(3,x) FROM bigt LIMIT 1; <waiting ...>
 step d1a2: <... completed>
 sum

@@ -38,6 +42,9 @@ step d2a1: <... completed>
 sum

 10000
+lock_share
+
+1
 step e1c: COMMIT;
 step d2c: COMMIT;
 step e2l: <... completed>
diff --git a/src/test/isolation/specs/deadlock-parallel.spec b/src/test/isolation/specs/deadlock-parallel.spec
index aa4a084..7ad290c 100644
--- a/src/test/isolation/specs/deadlock-parallel.spec
+++ b/src/test/isolation/specs/deadlock-parallel.spec
@@ -15,6 +15,25 @@
 # The deadlock detector resolves the deadlock by reversing the d1-e2 edge,
 # unblocking d1.

+# However ... it's not actually that well-defined whether the deadlock
+# detector will prefer to unblock d1 or d2.  It depends on which backend
+# is first to run DeadLockCheck after the deadlock condition is created:
+# that backend will search outwards from its own wait condition, and will
+# first find a loop involving the *other* lock.  We encourage that to be
+# one of the d2a1 parallel workers, which will therefore unblock d1a2
+# workers, by setting a shorter deadlock_timeout in session d2.  But on
+# slow machines, one or more d1a2 workers may not yet have reached their
+# lock waits, so that they're not unblocked by the first DeadLockCheck.
+# The next DeadLockCheck may choose to unblock the d2a1 workers instead,
+# which would allow d2a1 to complete before d1a2, causing the test to
+# freeze up because isolationtester isn't expecting that completion order.
+# (In effect, we have an undetectable deadlock because d2 is waiting for
+# d1's completion, but on the client side.)  To fix this, introduce an
+# additional lock (advisory lock 3), which is initially taken by d1 and
+# then d2a1 will wait for it after completing the main part of the test.
+# In this way, the deadlock detector can see that d1 must be completed
+# first, regardless of timing.
+
 setup
 {
   create function lock_share(int,int) returns int language sql as
@@ -39,15 +58,15 @@ setup        { BEGIN isolation level repeatable read;
               SET force_parallel_mode = off;
               SET deadlock_timeout = '10s';
 }
-# this lock will be taken in the leader, so it will persist:
-step "d1a1"    { SELECT lock_share(1,x) FROM bigt LIMIT 1; }
+# these locks will be taken in the leader, so they will persist:
+step "d1a1"    { SELECT lock_share(1,x), lock_excl(3,x) FROM bigt LIMIT 1; }
 # this causes all the parallel workers to take locks:
 step "d1a2"    { SET force_parallel_mode = on;
               SET parallel_setup_cost = 0;
               SET parallel_tuple_cost = 0;
               SET min_parallel_table_scan_size = 0;
               SET parallel_leader_participation = off;
-              SET max_parallel_workers_per_gather = 4;
+              SET max_parallel_workers_per_gather = 3;
               SELECT sum(lock_share(2,x)) FROM bigt; }
 step "d1c"    { COMMIT; }

@@ -58,14 +77,19 @@ setup        { BEGIN isolation level repeatable read;
 }
 # this lock will be taken in the leader, so it will persist:
 step "d2a2"    { select lock_share(2,x) FROM bigt LIMIT 1; }
-# this causes all the parallel workers to take locks:
+# this causes all the parallel workers to take locks;
+# after which, make the leader take lock 3 to prevent client-driven deadlock
 step "d2a1"    { SET force_parallel_mode = on;
               SET parallel_setup_cost = 0;
               SET parallel_tuple_cost = 0;
               SET min_parallel_table_scan_size = 0;
               SET parallel_leader_participation = off;
-              SET max_parallel_workers_per_gather = 4;
-              SELECT sum(lock_share(1,x)) FROM bigt; }
+              SET max_parallel_workers_per_gather = 3;
+              SELECT sum(lock_share(1,x)) FROM bigt;
+              SET force_parallel_mode = off;
+              RESET parallel_setup_cost;
+              RESET parallel_tuple_cost;
+              SELECT lock_share(3,x) FROM bigt LIMIT 1; }
 step "d2c"    { COMMIT; }

 session "e1"
