Thread: [PATCH] Improve performance of NOTIFY over many databases (v2)

[PATCH] Improve performance of NOTIFY over many databases (v2)

From: Martijn van Oosterhout
Date:
Hoi hackers,

Here is a reworked version of the previous patches.

The original three patches have been collapsed into one as given the
changes discussed it didn't make sense to keep them separate. There
are now two patches (the third is just to help with testing):

Patch 1: Tracks the listening backends in a list so non-listening
backends can be quickly skipped over. This is separate because it's
orthogonal to the rest of the changes and there are other ways to do
this.

Patch 2: This is the meat of the change. It implements all the
suggestions discussed:

- The queue tail is now only updated lazily, whenever the notify queue
moves to a new page. This did require a new global to track this state
through the transaction commit, but it seems worth it.

- Only backends for the current database are signalled when a
notification is made

- Slow backends are woken up one at a time rather than all at once

- A backend is allowed to lag up to 4 SLRU pages behind before being
signalled. This is a tradeoff between how often to get woken up versus
how much work to do once woken up.

- All the relevant comments have been updated to describe the new
algorithm. Locking should also be correct now.

This means in the normal case where listening backends get a
notification occasionally, no-one will ever be considered slow. An
exclusive lock for cleanup will happen about once per SLRU page.
There's still the exclusive locks on adding notifications but that's
unavoidable.
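
To illustrate the kind of check involved, here is a minimal standalone
sketch of a wraparound-aware page-distance test (the helper name and the
QUEUE_MAX_PAGE value below are stand-ins; the real logic lives in
src/backend/commands/async.c):

-----
#include <stdio.h>

#define QUEUE_MAX_PAGE      131071   /* assumed wraparound limit for page numbers */
#define QUEUE_CLEANUP_DELAY 4        /* pages a backend may lag before being kicked */

/* Difference p - q between two queue page numbers, accounting for wraparound. */
static int
page_diff(int p, int q)
{
    int diff = p - q;

    if (diff >= ((QUEUE_MAX_PAGE + 1) / 2))
        diff -= QUEUE_MAX_PAGE + 1;
    else if (diff < -((QUEUE_MAX_PAGE + 1) / 2))
        diff += QUEUE_MAX_PAGE + 1;
    return diff;
}

int
main(void)
{
    /* A backend only 2 pages behind the head is left alone... */
    printf("kick? %d\n", page_diff(100, 98) >= QUEUE_CLEANUP_DELAY);
    /* ...one 5 pages behind (even across the wraparound point) gets signalled. */
    printf("kick? %d\n", page_diff(2, QUEUE_MAX_PAGE - 2) >= QUEUE_CLEANUP_DELAY);
    return 0;
}
-----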

One minor issue is that pg_notification_queue_usage() will now return
a small but non-zero number (about 3e-6) even when nothing is really
going on. This could be fixed by having it take an exclusive lock
instead and updating to the latest values but that barely seems worth
it.

Performance-wise it's even better than my original patches, with about
20-25% reduction in CPU usage in my test setup (using the test script
sent previously).

Here is the log output from my postgres, where you see the signalling in action:

------
16:42:48.673 [10188] martijn@test_131 DEBUG:  PreCommit_Notify
16:42:48.673 [10188] martijn@test_131 DEBUG:  NOTIFY QUEUE = (74,896)...(79,0)
16:42:48.673 [10188] martijn@test_131 DEBUG:  backendTryAdvanceTail -> true
16:42:48.673 [10188] martijn@test_131 DEBUG:  AtCommit_Notify
16:42:48.673 [10188] martijn@test_131 DEBUG:  ProcessCompletedNotifies
16:42:48.673 [10188] martijn@test_131 DEBUG:  backendTryAdvanceTail -> false
16:42:48.673 [10188] martijn@test_131 DEBUG:  asyncQueueAdvanceTail
16:42:48.673 [10188] martijn@test_131 DEBUG:  waking backend 137 (pid 10055)
16:42:48.673 [10055] martijn@test_067 DEBUG:  ProcessIncomingNotify
16:42:48.673 [10187] martijn@test_131 DEBUG:  ProcessIncomingNotify
16:42:48.673 [10055] martijn@test_067 DEBUG:  asyncQueueAdvanceTail
16:42:48.673 [10055] martijn@test_067 DEBUG:  waking backend 138 (pid 10056)
16:42:48.673 [10187] martijn@test_131 DEBUG:  ProcessIncomingNotify: done
16:42:48.673 [10055] martijn@test_067 DEBUG:  ProcessIncomingNotify: done
16:42:48.673 [10056] martijn@test_067 DEBUG:  ProcessIncomingNotify
16:42:48.673 [10056] martijn@test_067 DEBUG:  asyncQueueAdvanceTail
16:42:48.673 [10056] martijn@test_067 DEBUG:  ProcessIncomingNotify: done
16:42:48.683 [9991] martijn@test_042 DEBUG:  Async_Notify(changes)
16:42:48.683 [9991] martijn@test_042 DEBUG:  PreCommit_Notify
16:42:48.683 [9991] martijn@test_042 DEBUG:  NOTIFY QUEUE = (75,7744)...(79,32)
16:42:48.683 [9991] martijn@test_042 DEBUG:  AtCommit_Notify
-----

Have a nice weekend.
-- 
Martijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/

Attachments

Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

From: Tom Lane
Date:
Martijn van Oosterhout <kleptog@gmail.com> writes:
> The original three patches have been collapsed into one as given the
> changes discussed it didn't make sense to keep them separate. There
> are now two patches (the third is just to help with testing):

> Patch 1: Tracks the listening backends in a list so non-listening
> backends can be quickly skipped over. This is separate because it's
> orthogonal to the rest of the changes and there are other ways to do
> this.

> Patch 2: This is the meat of the change. It implements all the
> suggestions discussed:

I pushed 0001 after doing some hacking on it --- it was sloppy about
datatypes, and about whether the invalid-entry value is 0 or -1,
and it was just wrong about keeping the list in backendid order.
(You can't conditionally skip looking for where to put the new
entry, if you want to maintain the order.  I thought about just
defining the list as unordered, which would simplify joining the
list initially, but that could get pretty cache-unfriendly when
there are lots of entries.)

0002 is now going to need a rebase, so please do that.

            regards, tom lane



Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

From: Martijn van Oosterhout
Date:
Hoi Tom,


On Wed, 11 Sep 2019 at 00:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:

> I pushed 0001 after doing some hacking on it --- it was sloppy about
> datatypes, and about whether the invalid-entry value is 0 or -1,
> and it was just wrong about keeping the list in backendid order.
> (You can't conditionally skip looking for where to put the new
> entry, if you want to maintain the order.  I thought about just
> defining the list as unordered, which would simplify joining the
> list initially, but that could get pretty cache-unfriendly when
> there are lots of entries.)
>
> 0002 is now going to need a rebase, so please do that.


Thanks for this, and good catch. Looks like I didn't test the first patch by itself very well.

Here is the rebased second patch.

Thanks in advance,
--
Attachments

Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

From: Tom Lane
Date:
Martijn van Oosterhout <kleptog@gmail.com> writes:
> Here is the rebased second patch.

This throws multiple compiler warnings for me:

async.c: In function 'asyncQueueUnregister':
async.c:1293: warning: unused variable 'advanceTail'
async.c: In function 'asyncQueueAdvanceTail':
async.c:2153: warning: 'slowbackendpid' may be used uninitialized in this function

Also, I don't exactly believe this bit:

+            /* If we are advancing to a new page, remember this so after the
+             * transaction commits we can attempt to advance the tail
+             * pointer, see ProcessCompletedNotifies() */
+            if (QUEUE_POS_OFFSET(QUEUE_HEAD) == 0)
+                backendTryAdvanceTail = true;

It seems unlikely that insertion would stop exactly at a page boundary,
but that seems to be what this is looking for.

But, really ... do we need the backendTryAdvanceTail flag at all?
I'm dubious, because it seems like asyncQueueReadAllNotifications
would have already covered the case if we're listening.  If we're
not listening, but we signalled some other listeners, it falls
to them to kick us if we're the slowest backend.  If we're not the
slowest backend then doing asyncQueueAdvanceTail isn't useful.

I agree with getting rid of the asyncQueueAdvanceTail call in
asyncQueueUnregister; on reflection doing that there seems pretty unsafe,
because we're not necessarily in a transaction and hence anything that
could possibly error is a bad idea.  However, it'd be good to add a
comment explaining that we're not doing that and why it's ok not to.

I'm fairly unimpressed with the "kick a random slow backend" logic.
There can be no point in kicking any but the slowest backend, ie
one whose pointer is exactly the oldest.  Since we're already computing
the min pointer in that loop, it would actually take *less* logic inside
the loop to remember the/a backend that had that pointer value, and then
decide afterwards whether it's slow enough to merit a kick.

            regards, tom lane



Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

From: Martijn van Oosterhout
Date:
Hoi Tom,


On Fri, 13 Sep 2019 at 22:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> This throws multiple compiler warnings for me:

Fixed.

> Also, I don't exactly believe this bit:
[snip]
> It seems unlikely that insertion would stop exactly at a page boundary,
> but that seems to be what this is looking for.

This is how asyncQueueAddEntries() works. Entries are never split over
pages. If there is not enough room, then it advances to the beginning
of the next page and returns. Hence here the offset is zero. I could
set the global inside asyncQueueAddEntries() but that seems icky.
Another alternative is to have asyncQueueAddEntries() return a boolean
"moved to new page", but that's just a long-winded way of doing what
it is now.
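
As a toy model of the never-split-across-pages rule (purely illustrative,
not the actual asyncQueueAddEntries() code; QUEUE_PAGESIZE here is just an
assumed stand-in):

-----
#include <stdio.h>

#define QUEUE_PAGESIZE 8192     /* assumed SLRU page size */

/* Entries never span pages: if one doesn't fit, the head jumps to the
 * start of the next page, so its offset there is exactly zero. */
static void
advance_head(int *page, int *offset, int entry_len)
{
    if (*offset + entry_len > QUEUE_PAGESIZE)
    {
        (*page)++;
        *offset = 0;
    }
    else
        *offset += entry_len;
}

int
main(void)
{
    int page = 74, offset = 8100;

    advance_head(&page, &offset, 120);               /* 8100 + 120 > 8192: spills over */
    printf("head is now (%d,%d)\n", page, offset);   /* prints "head is now (75,0)" */
    return 0;
}
-----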

> But, really ... do we need the backendTryAdvanceTail flag at all?
> I'm dubious, because it seems like asyncQueueReadAllNotifications
> would have already covered the case if we're listening.  If we're
> not listening, but we signalled some other listeners, it falls
> to them to kick us if we're the slowest backend.  If we're not the
> slowest backend then doing asyncQueueAdvanceTail isn't useful.

There are multiple issues here. asyncQueueReadAllNotifications() is
going to be called by each listener simultaneously, so each listener
is going to come to the same conclusion. On the other side, there is
no guarantee we wake up anyone as a result of the NOTIFY, e.g. if
there are no listeners in the current database. To be sure you try to
advance the tail, you have to trigger on the sending side. The global
is there because at the point we are inserting entries we are still in
a user transaction, potentially holding many table locks (the issue we
were running into in the first place). By setting
backendTryAdvanceTail we can move the work to
ProcessCompletedNotifies() which is after the transaction has
committed and the locks released.

> I agree with getting rid of the asyncQueueAdvanceTail call in
> asyncQueueUnregister; on reflection doing that there seems pretty unsafe,
> because we're not necessarily in a transaction and hence anything that
> could possibly error is a bad idea.  However, it'd be good to add a
> comment explaining that we're not doing that and why it's ok not to.

Comment added.

> I'm fairly unimpressed with the "kick a random slow backend" logic.
> There can be no point in kicking any but the slowest backend, ie
> one whose pointer is exactly the oldest.  Since we're already computing
> the min pointer in that loop, it would actually take *less* logic inside
> the loop to remember the/a backend that had that pointer value, and then
> decide afterwards whether it's slow enough to merit a kick.

Adjusted this. I'm not sure it's actually clearer this way, but it is
less work inside the loop. A small change is that now it won't signal
anyone if this backend is the slowest, which is more correct.
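
For illustration, the pattern is roughly this (a standalone sketch, not the
actual async.c loop; the values are invented and wraparound is ignored for
brevity):

-----
#include <stdio.h>

#define QUEUE_CLEANUP_DELAY 4   /* pages a backend may lag before meriting a kick */

int
main(void)
{
    int positions[] = {42, 37, 41, 40};   /* per-backend queue page numbers */
    int nbackends = 4;
    int head_page = 42;                   /* current queue head page */
    int min_page = head_page;
    int slowest = -1;

    /* Track the minimum position and remember which backend holds it. */
    for (int i = 0; i < nbackends; i++)
    {
        if (positions[i] < min_page)
        {
            min_page = positions[i];
            slowest = i;
        }
    }

    /* Decide only once, after the loop, whether that backend needs a kick. */
    if (slowest >= 0 && head_page - min_page >= QUEUE_CLEANUP_DELAY)
        printf("signal backend %d (page %d)\n", slowest, min_page);
    else
        printf("no one is far enough behind\n");
    return 0;
}
-----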

Thanks for the feedback. Attached is version 3.

Have a nice weekend,
-- 
Martijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/

Attachments

Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

From: Tom Lane
Date:
Martijn van Oosterhout <kleptog@gmail.com> writes:
> On Fri, 13 Sep 2019 at 22:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> But, really ... do we need the backendTryAdvanceTail flag at all?

> There are multiple issues here. asyncQueueReadAllNotifications() is
> going to be called by each listener simultaneously, so each listener
> is going to come to the same conclusion. On the other side, there is
> no guarantee we wake up anyone as a result of the NOTIFY, e.g. if
> there are no listeners in the current database. To be sure you try to
> advance the tail, you have to trigger on the sending side. The global
> is there because at the point we are inserting entries we are still in
> a user transaction, potentially holding many table locks (the issue we
> were running into in the first place). By setting
> backendTryAdvanceTail we can move the work to
> ProcessCompletedNotifies() which is after the transaction has
> committed and the locks released.

None of this seems to respond to my point: it looks to me like it would
work fine if you simply dropped the patch's additions in PreCommit_Notify
and ProcessCompletedNotifies, because there is already enough logic to
decide when to call asyncQueueAdvanceTail.  In particular, the result from
Signal[MyDB]Backends tells us whether anyone else was awakened, and
ProcessCompletedNotifies already does asyncQueueAdvanceTail if not.
As long as we did awaken someone, the ball's now in their court to
make sure asyncQueueAdvanceTail happens eventually.

There are corner cases where someone else might get signaled but never
do asyncQueueAdvanceTail -- for example, if they're in process of exiting
--- but I think the whole point of this patch is that we don't care too
much if that occasionally fails to happen.  If there's a continuing
stream of NOTIFY activity, asyncQueueAdvanceTail will happen often
enough to ensure that the queue storage doesn't bloat unreasonably.

            regards, tom lane



Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

From: Martijn van Oosterhout
Date:
On Sat, 14 Sep 2019 at 17:08, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Martijn van Oosterhout <kleptog@gmail.com> writes:
> > On Fri, 13 Sep 2019 at 22:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> >> But, really ... do we need the backendTryAdvanceTail flag at all?

> None of this seems to respond to my point: it looks to me like it would
> work fine if you simply dropped the patch's additions in PreCommit_Notify
> and ProcessCompletedNotifies, because there is already enough logic to
> decide when to call asyncQueueAdvanceTail.  In particular, the result from
> Signal[MyDB]Backends tells us whether anyone else was awakened, and
> ProcessCompletedNotifies already does asyncQueueAdvanceTail if not.
> As long as we did awaken someone, the ball's now in their court to
> make sure asyncQueueAdvanceTail happens eventually.

Ah, I think I see what you're getting at. As written,
asyncQueueReadAllNotifications() only calls asyncQueueAdvanceTail() if
*it* was a slow backend (advanceTail =
QUEUE_SLOW_BACKEND(MyBackendId)). In a situation where some databases
are regularly using NOTIFY and a few others never (but still
listening), it will lead to the situation where the tail never gets
advanced.

However, I guess you're thinking of asyncQueueReadAllNotifications()
triggering if the queue as a whole was too long. This could in
principle work but it does mean that at some point all backends
sending NOTIFY are going to start calling asyncQueueAdvanceTail()
every time, until the tail gets advanced, and if there are many idle
listening backends behind, this could take a while. The slowest backend
might receive more signals while it is processing and so end up
running asyncQueueAdvanceTail() twice. The fact that signals coalesce
stops the process getting completely out of hand but it does feel a
little uncontrolled.

The whole point of this patch is to ensure that only one backend at a
time is being woken up and calling asyncQueueAdvanceTail().

But you do point out that the return value of
SignalMyDBBackends() is used wrongly. The fact that no-one got
signalled only meant there were no other listeners on this database
which means nothing in terms of global queue cleanup. What you want to
know is if you're the only listener in the whole system and you can
test for that directly (QUEUE_FIRST_BACKEND == MyBackendId &&
QUEUE_NEXT_BACKEND(MyBackendId) == InvalidBackendId). I can adjust
this in the next version if necessary; it's fairly harmless as is, since
it only triggers in the case where a database is only notifying
itself, which probably isn't that common.

I hope I have correctly understood this time.

Have a nice weekend.
-- 
Martijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/



Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

From: Tom Lane
Date:
Martijn van Oosterhout <kleptog@gmail.com> writes:
> On Sat, 14 Sep 2019 at 17:08, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> None of this seems to respond to my point: it looks to me like it would
>> work fine if you simply dropped the patch's additions in PreCommit_Notify
>> and ProcessCompletedNotifies, because there is already enough logic to
>> decide when to call asyncQueueAdvanceTail.

> ...
> However, I guess you're thinking of asyncQueueReadAllNotifications()
> triggering if the queue as a whole was too long. This could in
> principle work but it does mean that at some point all backends
> sending NOTIFY are going to start calling asyncQueueAdvanceTail()
> every time, until the tail gets advanced, and if there are many idle
> listening backends behind, this could take a while. The slowest backend
> might receive more signals while it is processing and so end up
> running asyncQueueAdvanceTail() twice. The fact that signals coalesce
> stops the process getting completely out of hand but it does feel a
> little uncontrolled.
> The whole point of this patch is to ensure that only one backend at a
> time is being woken up and calling asyncQueueAdvanceTail().

I spent some more time thinking about this, and I'm still not too
satisfied with this patch's approach.  It seems to me the key insights
we're trying to make use of are:

1. We don't really need to keep the global tail pointer exactly
up to date.  It's bad if it falls way behind, but a few pages back
is fine.

2. When sending notifies, only listening backends connected to our
own database need be awakened immediately.  Backends connected to
other DBs will need to advance their queue pointer sometime, but
again it doesn't need to be right away.

3. It's bad for multiple processes to all be trying to do
asyncQueueAdvanceTail concurrently: they'll contend for exclusive
access to the AsyncQueueLock.  Therefore, having the listeners
do it is really the wrong thing, and instead we should do it on
the sending side.

However, the patch as presented doesn't go all the way on point 3,
instead having listeners maybe-or-maybe-not do asyncQueueAdvanceTail
in asyncQueueReadAllNotifications.  I propose that we should go all
the way and just define tail-advancing as something that happens on
the sending side, and only once every few pages.  I also think we
can simplify the handling of other-database listeners by including
them in the set signaled by SignalBackends, but only if they're
several pages behind.  So that leads me to the attached patch;
what do you think?

BTW, in my hands it seems like point 2 (skip wakening other-database
listeners) is the only really significant win here, and of course
that only wins when the notify traffic is spread across a fair number
of databases.  Which I fear is not the typical use-case.  In single-DB
use-cases, point 2 helps not at all.  I had a really hard time measuring
any benefit from point 3 --- I eventually saw a noticeable savings
when I tried having one notifier and 100 listen-only backends, but
again that doesn't seem like a typical use-case.  I could not replicate
your report of lots of time spent in asyncQueueAdvanceTail's lock
acquisition.  I wonder whether you're using a very large max_connections
setting and we already fixed most of the problem with that in bca6e6435.
Still, this patch doesn't seem to make any cases worse, so I don't mind
if it's just improving unusual use-cases.

            regards, tom lane

diff --git a/src/backend/commands/async.c b/src/backend/commands/async.c
index f26269b..7791f78 100644
--- a/src/backend/commands/async.c
+++ b/src/backend/commands/async.c
@@ -75,8 +75,10 @@
  *      list of listening backends and send a PROCSIG_NOTIFY_INTERRUPT signal
  *      to every listening backend (we don't know which backend is listening on
  *      which channel so we must signal them all). We can exclude backends that
- *      are already up to date, though.  We don't bother with a self-signal
- *      either, but just process the queue directly.
+ *      are already up to date, though, and we can also exclude backends that
+ *      are in other databases (unless they are way behind and should be kicked
+ *      to make them advance their pointers).  We don't bother with a
+ *      self-signal either, but just process the queue directly.
  *
  * 5. Upon receipt of a PROCSIG_NOTIFY_INTERRUPT signal, the signal handler
  *      sets the process's latch, which triggers the event to be processed
@@ -89,13 +91,14 @@
  *      Inbound-notify processing consists of reading all of the notifications
  *      that have arrived since scanning last time. We read every notification
  *      until we reach either a notification from an uncommitted transaction or
- *      the head pointer's position. Then we check if we were the laziest
- *      backend: if our pointer is set to the same position as the global tail
- *      pointer is set, then we move the global tail pointer ahead to where the
- *      second-laziest backend is (in general, we take the MIN of the current
- *      head position and all active backends' new tail pointers). Whenever we
- *      move the global tail pointer we also truncate now-unused pages (i.e.,
- *      delete files in pg_notify/ that are no longer used).
+ *      the head pointer's position.
+ *
+ * 6. To avoid SLRU wraparound and limit disk space consumption, the tail
+ *      pointer needs to be advanced so that old pages can be truncated.
+ *      This is relatively expensive (notably, it requires an exclusive lock),
+ *      so we don't want to do it often.  We make sending backends do this work
+ *      if they advanced the queue head into a new page, but only once every
+ *      QUEUE_CLEANUP_DELAY pages.
  *
  * An application that listens on the same channel it notifies will get
  * NOTIFY messages for its own NOTIFYs.  These can be ignored, if not useful,
@@ -212,6 +215,19 @@ typedef struct QueuePosition
      (x).offset > (y).offset ? (x) : (y))

 /*
+ * Parameter determining how often we try to advance the tail pointer:
+ * we do that after every QUEUE_CLEANUP_DELAY pages of NOTIFY data.  This is
+ * also the distance by which a backend in another database needs to be
+ * behind before we'll decide we need to wake it up to advance its pointer.
+ *
+ * Resist the temptation to make this really large.  While that would save
+ * work in some places, it would add cost in others.  In particular, this
+ * should likely be less than NUM_ASYNC_BUFFERS, to ensure that backends
+ * catch up before the pages they'll need to read fall out of SLRU cache.
+ */
+#define QUEUE_CLEANUP_DELAY 4
+
+/*
  * Struct describing a listening backend's status
  */
 typedef struct QueueBackendStatus
@@ -252,8 +268,8 @@ typedef struct QueueBackendStatus
 typedef struct AsyncQueueControl
 {
     QueuePosition head;            /* head points to the next free location */
-    QueuePosition tail;            /* the global tail is equivalent to the pos of
-                                 * the "slowest" backend */
+    QueuePosition tail;            /* tail must be <= the queue position of every
+                                 * listening backend */
     BackendId    firstListener;    /* id of first listener, or InvalidBackendId */
     TimestampTz lastQueueFillWarn;    /* time of last queue-full msg */
     QueueBackendStatus backend[FLEXIBLE_ARRAY_MEMBER];
@@ -402,10 +418,14 @@ static bool amRegisteredListener = false;
 /* has this backend sent notifications in the current transaction? */
 static bool backendHasSentNotifications = false;

+/* have we advanced to a page that's a multiple of QUEUE_CLEANUP_DELAY? */
+static bool backendTryAdvanceTail = false;
+
 /* GUC parameter */
 bool        Trace_notify = false;

 /* local function prototypes */
+static int    asyncQueuePageDiff(int p, int q);
 static bool asyncQueuePagePrecedes(int p, int q);
 static void queue_listen(ListenActionKind action, const char *channel);
 static void Async_UnlistenOnExit(int code, Datum arg);
@@ -421,7 +441,7 @@ static void asyncQueueNotificationToEntry(Notification *n, AsyncQueueEntry *qe);
 static ListCell *asyncQueueAddEntries(ListCell *nextNotify);
 static double asyncQueueUsage(void);
 static void asyncQueueFillWarning(void);
-static bool SignalBackends(void);
+static void SignalBackends(void);
 static void asyncQueueReadAllNotifications(void);
 static bool asyncQueueProcessPageEntries(volatile QueuePosition *current,
                                          QueuePosition stop,
@@ -436,10 +456,11 @@ static int    notification_match(const void *key1, const void *key2, Size keysize);
 static void ClearPendingActionsAndNotifies(void);

 /*
- * We will work on the page range of 0..QUEUE_MAX_PAGE.
+ * Compute the difference between two queue page numbers (i.e., p - q),
+ * accounting for wraparound.
  */
-static bool
-asyncQueuePagePrecedes(int p, int q)
+static int
+asyncQueuePageDiff(int p, int q)
 {
     int            diff;

@@ -455,7 +476,14 @@ asyncQueuePagePrecedes(int p, int q)
         diff -= QUEUE_MAX_PAGE + 1;
     else if (diff < -((QUEUE_MAX_PAGE + 1) / 2))
         diff += QUEUE_MAX_PAGE + 1;
-    return diff < 0;
+    return diff;
+}
+
+/* Is p < q, accounting for wraparound? */
+static bool
+asyncQueuePagePrecedes(int p, int q)
+{
+    return asyncQueuePageDiff(p, q) < 0;
 }

 /*
@@ -1051,8 +1079,6 @@ Exec_ListenPreCommit(void)
      * notification to the frontend.  Also, although our transaction might
      * have executed NOTIFY, those message(s) aren't queued yet so we can't
      * see them in the queue.
-     *
-     * This will also advance the global tail pointer if possible.
      */
     if (!QUEUE_POS_EQUAL(max, head))
         asyncQueueReadAllNotifications();
@@ -1156,7 +1182,6 @@ void
 ProcessCompletedNotifies(void)
 {
     MemoryContext caller_context;
-    bool        signalled;

     /* Nothing to do if we didn't send any notifications */
     if (!backendHasSentNotifications)
@@ -1185,23 +1210,20 @@ ProcessCompletedNotifies(void)
     StartTransactionCommand();

     /* Send signals to other backends */
-    signalled = SignalBackends();
+    SignalBackends();

     if (listenChannels != NIL)
     {
         /* Read the queue ourselves, and send relevant stuff to the frontend */
         asyncQueueReadAllNotifications();
     }
-    else if (!signalled)
+
+    /*
+     * If it's time to try to advance the global tail pointer, do that.
+     */
+    if (backendTryAdvanceTail)
     {
-        /*
-         * If we found no other listening backends, and we aren't listening
-         * ourselves, then we must execute asyncQueueAdvanceTail to flush the
-         * queue, because ain't nobody else gonna do it.  This prevents queue
-         * overflow when we're sending useless notifies to nobody. (A new
-         * listener could have joined since we looked, but if so this is
-         * harmless.)
-         */
+        backendTryAdvanceTail = false;
         asyncQueueAdvanceTail();
     }

@@ -1242,8 +1264,6 @@ IsListeningOn(const char *channel)
 static void
 asyncQueueUnregister(void)
 {
-    bool        advanceTail;
-
     Assert(listenChannels == NIL);    /* else caller error */

     if (!amRegisteredListener)    /* nothing to do */
@@ -1253,10 +1273,7 @@ asyncQueueUnregister(void)
      * Need exclusive lock here to manipulate list links.
      */
     LWLockAcquire(AsyncQueueLock, LW_EXCLUSIVE);
-    /* check if entry is valid and oldest ... */
-    advanceTail = (MyProcPid == QUEUE_BACKEND_PID(MyBackendId)) &&
-        QUEUE_POS_EQUAL(QUEUE_BACKEND_POS(MyBackendId), QUEUE_TAIL);
-    /* ... then mark it invalid */
+    /* Mark our entry as invalid */
     QUEUE_BACKEND_PID(MyBackendId) = InvalidPid;
     QUEUE_BACKEND_DBOID(MyBackendId) = InvalidOid;
     /* and remove it from the list */
@@ -1278,10 +1295,6 @@ asyncQueueUnregister(void)

     /* mark ourselves as no longer listed in the global array */
     amRegisteredListener = false;
-
-    /* If we were the laziest backend, try to advance the tail pointer */
-    if (advanceTail)
-        asyncQueueAdvanceTail();
 }

 /*
@@ -1467,6 +1480,15 @@ asyncQueueAddEntries(ListCell *nextNotify)
              * page without overrunning the queue.
              */
             slotno = SimpleLruZeroPage(AsyncCtl, QUEUE_POS_PAGE(queue_head));
+
+            /*
+             * If the new page address is a multiple of QUEUE_CLEANUP_DELAY,
+             * set flag to remember that we should try to advance the tail
+             * pointer (we don't want to actually do that right here).
+             */
+            if (QUEUE_POS_PAGE(queue_head) % QUEUE_CLEANUP_DELAY == 0)
+                backendTryAdvanceTail = true;
+
             /* And exit the loop */
             break;
         }
@@ -1570,31 +1592,30 @@ asyncQueueFillWarning(void)
 }

 /*
- * Send signals to all listening backends (except our own).
+ * Send signals to listening backends.
  *
- * Returns true if we sent at least one signal.
+ * We never signal our own process; that should be handled by our caller.
  *
- * Since we need EXCLUSIVE lock anyway we also check the position of the other
- * backends and in case one is already up-to-date we don't signal it.
- * This can happen if concurrent notifying transactions have sent a signal and
- * the signaled backend has read the other notifications and ours in the same
- * step.
+ * Normally we signal only backends in our own database, since only those
+ * backends could be interested in notifies we send.  However, if there's
+ * notify traffic in our database but no traffic in another database that
+ * does have listener(s), those listeners will fall further and further
+ * behind.  Waken them anyway if they're far enough behind, so that they'll
+ * advance their queue position pointers, allowing the global tail to advance.
  *
  * Since we know the BackendId and the Pid the signalling is quite cheap.
  */
-static bool
+static void
 SignalBackends(void)
 {
-    bool        signalled = false;
     int32       *pids;
     BackendId  *ids;
     int            count;
-    int32        pid;

     /*
-     * Identify all backends that are listening and not already up-to-date. We
-     * don't want to send signals while holding the AsyncQueueLock, so we just
-     * build a list of target PIDs.
+     * Identify backends that we need to signal.  We don't want to send
+     * signals while holding the AsyncQueueLock, so this loop just builds a
+     * list of target PIDs.
      *
      * XXX in principle these pallocs could fail, which would be bad. Maybe
      * preallocate the arrays?    But in practice this is only run in trivial
@@ -1607,26 +1628,43 @@ SignalBackends(void)
     LWLockAcquire(AsyncQueueLock, LW_EXCLUSIVE);
     for (BackendId i = QUEUE_FIRST_LISTENER; i > 0; i = QUEUE_NEXT_LISTENER(i))
     {
-        pid = QUEUE_BACKEND_PID(i);
+        int32        pid = QUEUE_BACKEND_PID(i);
+        QueuePosition pos;
+
         Assert(pid != InvalidPid);
-        if (pid != MyProcPid)
+        if (pid == MyProcPid)
+            continue;            /* never signal self */
+        pos = QUEUE_BACKEND_POS(i);
+        if (QUEUE_BACKEND_DBOID(i) == MyDatabaseId)
         {
-            QueuePosition pos = QUEUE_BACKEND_POS(i);
-
-            if (!QUEUE_POS_EQUAL(pos, QUEUE_HEAD))
-            {
-                pids[count] = pid;
-                ids[count] = i;
-                count++;
-            }
+            /*
+             * Always signal listeners in our own database, unless they're
+             * already caught up (unlikely, but possible).
+             */
+            if (QUEUE_POS_EQUAL(pos, QUEUE_HEAD))
+                continue;
+        }
+        else
+        {
+            /*
+             * Listeners in other databases should be signaled only if they
+             * are far behind.
+             */
+            if (asyncQueuePageDiff(QUEUE_POS_PAGE(QUEUE_HEAD),
+                                   QUEUE_POS_PAGE(pos)) < QUEUE_CLEANUP_DELAY)
+                continue;
         }
+        /* OK, need to signal this one */
+        pids[count] = pid;
+        ids[count] = i;
+        count++;
     }
     LWLockRelease(AsyncQueueLock);

     /* Now send signals */
     for (int i = 0; i < count; i++)
     {
-        pid = pids[i];
+        int32        pid = pids[i];

         /*
          * Note: assuming things aren't broken, a signal failure here could
@@ -1636,14 +1674,10 @@ SignalBackends(void)
          */
         if (SendProcSignal(pid, PROCSIG_NOTIFY_INTERRUPT, ids[i]) < 0)
             elog(DEBUG3, "could not signal backend with PID %d: %m", pid);
-        else
-            signalled = true;
     }

     pfree(pids);
     pfree(ids);
-
-    return signalled;
 }

 /*
@@ -1844,7 +1878,6 @@ asyncQueueReadAllNotifications(void)
     QueuePosition oldpos;
     QueuePosition head;
     Snapshot    snapshot;
-    bool        advanceTail;

     /* page_buffer must be adequately aligned, so use a union */
     union
@@ -1966,13 +1999,8 @@ asyncQueueReadAllNotifications(void)
         /* Update shared state */
         LWLockAcquire(AsyncQueueLock, LW_SHARED);
         QUEUE_BACKEND_POS(MyBackendId) = pos;
-        advanceTail = QUEUE_POS_EQUAL(oldpos, QUEUE_TAIL);
         LWLockRelease(AsyncQueueLock);

-        /* If we were the laziest backend, try to advance the tail pointer */
-        if (advanceTail)
-            asyncQueueAdvanceTail();
-
         PG_RE_THROW();
     }
     PG_END_TRY();
@@ -1980,13 +2008,8 @@ asyncQueueReadAllNotifications(void)
     /* Update shared state */
     LWLockAcquire(AsyncQueueLock, LW_SHARED);
     QUEUE_BACKEND_POS(MyBackendId) = pos;
-    advanceTail = QUEUE_POS_EQUAL(oldpos, QUEUE_TAIL);
     LWLockRelease(AsyncQueueLock);

-    /* If we were the laziest backend, try to advance the tail pointer */
-    if (advanceTail)
-        asyncQueueAdvanceTail();
-
     /* Done with snapshot */
     UnregisterSnapshot(snapshot);
 }

Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

From: Martijn van Oosterhout
Date:
Hoi Tom,

On Mon, 16 Sep 2019 at 00:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> I spent some more time thinking about this, and I'm still not too
> satisfied with this patch's approach.  It seems to me the key insights
> we're trying to make use of are:
>
> 1. We don't really need to keep the global tail pointer exactly
> up to date.  It's bad if it falls way behind, but a few pages back
> is fine.

Agreed.

> 2. When sending notifies, only listening backends connected to our
> own database need be awakened immediately.  Backends connected to
> other DBs will need to advance their queue pointer sometime, but
> again it doesn't need to be right away.

Agreed.

> 3. It's bad for multiple processes to all be trying to do
> asyncQueueAdvanceTail concurrently: they'll contend for exclusive
> access to the AsyncQueueLock.  Therefore, having the listeners
> do it is really the wrong thing, and instead we should do it on
> the sending side.

Agreed, but I'd add that in databases that are largely idle there may
never be a sender, so listeners there need to be advanced some other
way.

> However, the patch as presented doesn't go all the way on point 3,
> instead having listeners maybe-or-maybe-not do asyncQueueAdvanceTail
> in asyncQueueReadAllNotifications.  I propose that we should go all
> the way and just define tail-advancing as something that happens on
> the sending side, and only once every few pages.  I also think we
> can simplify the handling of other-database listeners by including
> them in the set signaled by SignalBackends, but only if they're
> several pages behind.  So that leads me to the attached patch;
> what do you think?

I think I like the idea of having SignalBackends do the waking up of a
slow backend, but I'm not enthused by the "let's wake up (at once)
everyone that is behind". That's one of the issues I was explicitly
trying to solve. If there is any significant number of "slow"
backends then we get the "thundering herd" again. If the number of
slow backends exceeds the number of cores then commits across the
system could be held up quite a while (which is what caused me to make
this patch; multiple seconds was not unusual).

The maybe/maybe not in asyncQueueReadAllNotifications is that "if I
was behind, then I probably got woken up, hence I need to wake up
someone else", thus ensuring the cleanup proceeds in an orderly
fashion, leaving gaps where the lock isn't held allowing COMMITs to
proceed.

> BTW, in my hands it seems like point 2 (skip wakening other-database
> listeners) is the only really significant win here, and of course
> that only wins when the notify traffic is spread across a fair number
> of databases.  Which I fear is not the typical use-case.  In single-DB
> use-cases, point 2 helps not at all.  I had a really hard time measuring
> any benefit from point 3 --- I eventually saw a noticeable savings
> when I tried having one notifier and 100 listen-only backends, but
> again that doesn't seem like a typical use-case.  I could not replicate
> your report of lots of time spent in asyncQueueAdvanceTail's lock
> acquisition.  I wonder whether you're using a very large max_connections
> setting and we already fixed most of the problem with that in bca6e6435.
> Still, this patch doesn't seem to make any cases worse, so I don't mind
> if it's just improving unusual use-cases.

I'm not sure if it's an unusual use-case, but it is my use-case :).
Specifically, there are 100+ instances of the same application running
on the same cluster with wildly different usage patterns. Some will be
idle because no-one is logged in, some will be quite busy. Although
there are only 2 listeners per database, that's still a lot of
listeners that can be behind. Though I agree that bca6e6435 will have
mitigated quite a lot (yes, max_connections is quite high). Another
mitigation would be to spread across more smaller database clusters,
which we need to do anyway.

That said, your approach is conceptually simpler which is also worth
something and it gets essentially all the same benefits for more
normal use cases. If the QUEUE_CLEANUP_DELAY were raised a bit then we
could mitigate the rest on the client side by having idle
databases send dummy notifies every now and then to trigger cleanup
for their database. The flip-side is that slow backends will then have
further to catch up, thus holding the lock longer. It's not worth
making it configurable so we have to guess, but 16 is perhaps a good
compromise.
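
As a sketch of that client-side idea (purely illustrative, not part of the
patch set; the connection string and channel name below are made up), an
otherwise idle application could occasionally run something like:

-----
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* Assumed connection string; adjust for the real database. */
    PGconn   *conn = PQconnectdb("dbname=test_042");
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* No listener needs to exist; the point is simply to make this backend
     * a sender now and then, so tail cleanup gets triggered in this database. */
    res = PQexec(conn, "NOTIFY queue_keepalive");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "NOTIFY failed: %s", PQerrorMessage(conn));
    PQclear(res);

    PQfinish(conn);
    return 0;
}
-----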

Have a nice day,
-- 
Martijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/



Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

From: Tom Lane
Date:
Martijn van Oosterhout <kleptog@gmail.com> writes:
> On Mon, 16 Sep 2019 at 00:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> ... I also think we
>> can simplify the handling of other-database listeners by including
>> them in the set signaled by SignalBackends, but only if they're
>> several pages behind.  So that leads me to the attached patch;
>> what do you think?

> I think I like the idea of having SignalBackends do the waking up of a
> slow backend, but I'm not enthused by the "let's wake up (at once)
> everyone that is behind". That's one of the issues I was explicitly
> trying to solve. If there is any significant number of "slow"
> backends then we get the "thundering herd" again.

But do we care?  With asyncQueueAdvanceTail gone from the listeners,
there's no longer an exclusive lock for them to contend on.  And,
again, I failed to see any significant contention even in HEAD as it
stands; so I'm unconvinced that you're solving a live problem.

            regards, tom lane



Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

From: Martijn van Oosterhout
Date:
Hoi Tom,

On Mon, 16 Sep 2019 at 15:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Martijn van Oosterhout <kleptog@gmail.com> writes:
> > I think I like the idea of having SignalBackends do the waking up of a
> > slow backend, but I'm not enthused by the "let's wake up (at once)
> > everyone that is behind". That's one of the issues I was explicitly
> > trying to solve. If there is any significant number of "slow"
> > backends then we get the "thundering herd" again.
>
> But do we care?  With asyncQueueAdvanceTail gone from the listeners,
> there's no longer an exclusive lock for them to contend on.  And,
> again, I failed to see any significant contention even in HEAD as it
> stands; so I'm unconvinced that you're solving a live problem.

You're right, they only acquire a shared lock which is much less of a
problem. And I forgot that we're still reducing the load from a few
hundred signals and exclusive locks per NOTIFY to perhaps a dozen
shared locks every thousand messages. You'd be hard pressed to
demonstrate there's a real problem here.

So I think your patch is fine as is.

Looking at the release cycle it looks like the earliest either of
these patches will appear in a release is PG13, right?

Thanks again.
-- 
Martijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/



Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

From: Tom Lane
Date:
Martijn van Oosterhout <kleptog@gmail.com> writes:
> On Mon, 16 Sep 2019 at 15:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> But do we care?  With asyncQueueAdvanceTail gone from the listeners,
>> there's no longer an exclusive lock for them to contend on.  And,
>> again, I failed to see any significant contention even in HEAD as it
>> stands; so I'm unconvinced that you're solving a live problem.

> You're right, they only acquire a shared lock which is much less of a
> problem. And I forgot that we're still reducing the load from a few
> hundred signals and exclusive locks per NOTIFY to perhaps a dozen
> shared locks every thousand messages. You'd be hard pressed to
> demonstrate there's a real problem here.

> So I think your patch is fine as is.

OK, pushed.

> Looking at the release cycle it looks like the earliest either of
> these patches will appear in a release is PG13, right?

Right.

            regards, tom lane