Discussion: Re: ANALYZE locks pg_listener in EXCLUSIVE for long
At 02:54 PM 3/05/2004, Tom Lane wrote:
>Please dig deeper.

I may have found the problem; all the hung processes show 'async_notify waiting' in ps, and the ANALYZE eventually dies with 'tuple concurrently updated'.

The routine 'ProcessIncomingNotify' in async.c does indeed try to lock pg_listener (even if we're not using NOTIFY/LISTEN). Not sure why the ANALYZE is locking the relation, though... but it is locked in AccessShareLock.

I can send a log of my investigations if necessary.

----------------------------------------------------------------
Philip Warner | Albatross Consulting Pty. Ltd. (A.B.N. 75 008 659 498)
Tel: (+61) 0500 83 82 81 | Fax: (+61) 03 5330 3172
http://www.rhyme.com.au | PGP key available upon request, and from pgp.mit.edu:11371
At 06:21 PM 3/05/2004, Philip Warner wrote:
>'tuple concurrently updated'

The database logs show the same error in each case where a long delay has occurred. And before anyone suggests it, we already have processes in place to prevent two ANALYZEs running at the same time.
Philip Warner <pjw@rhyme.com.au> writes:
> I may have found the problem; all the hung processes show 'async_notify
> waiting' in ps, and the ANALYZE eventually dies with 'tuple concurrently
> updated'.
> The routine 'ProcessIncomingNotify' in async.c does indeed try to lock
> pg_listener (even if we're not using NOTIFY/LISTEN). Not sure why the
> ANALYZE is locking the relation, though... but it is locked in AccessShareLock.

Hm. What seems likely to have happened is that the sinval message queue got full. We currently deal with this by sending SIGUSR2 to all backends, which forces them through a NOTIFY-check cycle; a byproduct of the transaction start is to read pending sinval messages. (This is somebody's ugly quick hack from years ago; we really oughta find a less expensive way of doing it.)

That would have left all the idle backends trying to get exclusive lock on pg_listener, and if the ANALYZE subsequently reached pg_listener, its share lock would queue up behind those requests.

What is not clear yet is why *all* of them are blocked. Seems something else must have some kind of lock already on pg_listener; but who? Can you get a dump of the pg_locks view while this is happening?

> And before anyone suggests it, we already have processes in place
> to prevent two ANALYZEs running at the same time.

How confident are you in those "processes"? I don't know of any other mechanism for 'tuple concurrently updated' failures in ANALYZE than concurrent analyze runs ...

			regards, tom lane
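[Editorial aside: for anyone reproducing this, a pg_locks query along these lines is a sketch of how to capture the requested dump. It assumes a 7.x-era server where pg_listener still exists (the catalog was removed in 9.0, when LISTEN/NOTIFY moved off-table); the column names shown match the old view.]

```sql
-- Show every lock held or awaited on pg_listener, granted ones first.
-- Ungranted AccessExclusiveLock rows here would be the backends stuck
-- in ProcessIncomingNotify; a granted AccessShareLock row would be
-- the ANALYZE they are queued behind.
SELECT l.pid, l.mode, l.granted
FROM pg_locks l
WHERE l.relation = 'pg_listener'::regclass
ORDER BY l.granted DESC, l.pid;
```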
At 11:04 PM 3/05/2004, Tom Lane wrote:
>Hm. What seems likely to have happened is that the sinval message queue
>got full.

I agree (our emails crossed).

>That would have left all the idle backends trying to get exclusive lock
>on pg_listener, and if the ANALYZE subsequently reached pg_listener, its
>share lock would queue up behind those requests.

What I see is that the ANALYZE job already has it in ACCESS SHARE mode, and keeps the lock until it dies with the 'concurrent update' error.

>What is not clear yet is why *all* of them are blocked. Seems something
>else must have some kind of lock already on pg_listener; but who?

ANALYZE.

>Can you get a dump of the pg_locks view while this is happening?

Attached.

>How confident are you in those "processes"? I don't know of any other
>mechanism for 'tuple concurrently updated' failures in ANALYZE than
>concurrent analyze runs ...

Fairly. In this particular instance the error was probably caused by a manually run VACUUM (part of me stressing it to encourage the error). Contrary to my other email, we haven't had the 'tuple concurrently updated' error since March (until today, with me messing around).

What I do have is minute-by-minute dumps of pg_locks and ps for the day. At each hang there were many processes in 'async_notify waiting' and an ANALYZE job had the lock in ACCESS SHARE mode. I do not have minute-by-minute logs for more than today, but there were 3 hangs today, and only one with the concurrent update error.

It would be interesting if we could find a piece of backend code that did a 'select * from pg_listener', and hence locked it in ACCESS SHARE. At the moment, it looks like either the ANALYZE is triggering an error that causes its backend to read pg_listener, or it is dying while analyzing pg_listener. The latter seems unlikely since it hangs frequently, and pg_listener is empty.

Does ANALYZE roll back if it dies? Could this account for the delay?
Attachments
Philip Warner <pjw@rhyme.com.au> writes:
> At 11:04 PM 3/05/2004, Tom Lane wrote:
>> How confident are you in those "processes"? I don't know of any other
>> mechanism for 'tuple concurrently updated' failures in ANALYZE than
>> concurrent analyze runs ...

> Fairly. In this particular instance the error was probably caused by a
> manually run VACUUM (part of me stressing it to encourage the error).

Yeah, I see a process 14295 in your dump that seems to be trying to ANALYZE (at least, it's got write lock on pg_statistic...). 8631 is the incumbent ANALYZE, and it's got locks all over the place :-(

I think what we have here is an evil side-effect of the change a couple versions back to allow standalone ANALYZE to run as a single transaction. A database-wide ANALYZE will therefore accumulate AccessShareLock on each table it touches, and it won't release these locks until commit. So the scenario goes like this:

1. Somewhere relatively early in its run, ANALYZE processes pg_statistic. So it's now holding AccessShareLock on pg_statistic.

2. As the ANALYZE proceeds, it issues sinval messages due to the updates it's making in pg_statistic. This is normal. There will be (at least) one such message per column analyzed, and it sounds like your database has many columns. Plus of course other catalog updates could be occurring in other backends.

3. There is at least one other backend sitting idle and therefore not reading sinval messages. So eventually the sinval queue gets 70% full and the SIGUSR2 escape-hatch is triggered.

4. The idle backends (and eventually non-idle ones, too, whenever they next reach the idle loop) try to do the NOTIFY thing, and get blocked trying to acquire AccessExclusiveLock on pg_listener. They will now be stuck until the ANALYZE completes.

As a quick-hack fix, I think it would do to reduce the locks taken on pg_listener in async.c from AccessExclusiveLock to ExclusiveLock.
This would serve the purpose of serializing async.c processing without creating a conflict against ANALYZE's AccessShareLock.

Some other things we ought to think about for the future:

* Is it really a good idea for database-wide ANALYZE to run as a single transaction? Holding all those locks is a recipe for deadlocks, even if they're as inoffensive as AccessShareLocks normally are.

* Can we use something less heavyweight than ProcessIncomingNotify to deal with the sinval-clearing problem? Not only is that routine expensive, but it is a serialization bottleneck, which is exactly what we *don't* want in something that all the backends are getting told to do at the same time. I think the original motivation for that hack was because we didn't have a spare signal number available to dedicate to sinval response, but SIGUSR1 has been free for a couple releases now. I'm very tempted to commandeer it for sinval.

			regards, tom lane
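[Editorial aside: the reason the downgrade works falls out of PostgreSQL's lock conflict matrix. EXCLUSIVE conflicts with every mode except ACCESS SHARE, so NOTIFY processing still serializes against itself while no longer queueing behind (or ahead of) ANALYZE's read lock. A two-session sketch, using a made-up table name `t`:]

```sql
-- Session A: take the lock a database-wide ANALYZE would hold until commit.
BEGIN;
LOCK TABLE t IN ACCESS SHARE MODE;

-- Session B, old async.c behaviour: this blocks until session A commits,
-- because ACCESS EXCLUSIVE conflicts with ACCESS SHARE.
BEGIN;
LOCK TABLE t IN ACCESS EXCLUSIVE MODE;
ROLLBACK;

-- Session B, with the proposed fix: granted immediately. EXCLUSIVE
-- conflicts with everything except ACCESS SHARE, so concurrent NOTIFY
-- processing is still serialized against other writers, but readers
-- (and the long-running ANALYZE) are not blocked and do not block it.
BEGIN;
LOCK TABLE t IN EXCLUSIVE MODE;
ROLLBACK;
```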
I wrote:
> 2. As the ANALYZE proceeds, it issues sinval messages due to the updates
> it's making in pg_statistic. This is normal.

Small correction: actually, backends only send sinval messages at commit, so the ANALYZE will just be accumulating pending messages in its private memory. Your observed symptom therefore can only occur if other transactions running parallel to the ANALYZE perform sufficient catalog-updating activity to fill the sinval message queue. And there must also be at least one long-term-idle backend, so that the queue doesn't get drained.

I had been wondering why we'd not identified this problem before, but that combination of factors is probably unusual enough to explain why not.

			regards, tom lane
At 01:30 AM 4/05/2004, Tom Lane wrote:
>can only occur if other
>transactions running parallel to the ANALYZE perform sufficient catalog
>updating activity to fill the sinval message queue. And there must also
>be at least one long-term-idle backend, so that the queue doesn't get
>drained.

Sounds quite likely; usually seems to occur at 'shoulder' load times: lots of updates still happening (several each second) and a server process pool that is larger than necessary to handle the load.

I'll replace all:

    heap_openr(ListenerRelationName, AccessExclusiveLock);

with

    heap_openr(ListenerRelationName, ExclusiveLock);

and see how it goes. Thanks for the help.