Re: [PATCH] Improve performance of NOTIFY over many databases (issue blocking on AccessExclusiveLock on object 0 of class 1262 of database 0)

From: Martijn van Oosterhout
Subject: Re: [PATCH] Improve performance of NOTIFY over many databases (issue blocking on AccessExclusiveLock on object 0 of class 1262 of database 0)
Date:
Msg-id: CADWG95uLhar1uq6PQLoY1mTQYeN23c1dvOr2tVjcXUBZ1ge9XA@mail.gmail.com
In reply to: [PATCH] Improve performance of NOTIFY over many databases (issue blocking on AccessExclusiveLock on object 0 of class 1262 of database 0)  (Martijn van Oosterhout <kleptog@gmail.com>)
Responses: Re: [PATCH] Improve performance of NOTIFY over many databases (issue blocking on AccessExclusiveLock on object 0 of class 1262 of database 0)  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
Hoi hackers,

Please find attached updated versions of the patches; I've now tested
them. Also attached is a reproduction script to verify that they
actually work.
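
For readers without the attachment, the harness boils down to something
like the sketch below: one listening connection per database plus a
single connection hammering NOTIFY. This is only a minimal sketch in C
using libpq, not the attached script (which also measures delivery
delays and CPU usage); the database names, channel name, and counts are
placeholders.

/*
 * Minimal LISTEN/NOTIFY load-generator sketch (NOT the attached script).
 * Opens one listening connection per database, then issues NOTIFYs from a
 * single database and drains whatever the listeners receive.
 *
 * Build with something like:
 *   cc harness.c -o harness -I$(pg_config --includedir) -lpq
 */
#include <stdio.h>
#include <libpq-fe.h>

#define NUM_DBS       150      /* placeholder: databases db_0 .. db_149 */
#define NUM_NOTIFIES  10000    /* placeholder iteration count */

int
main(void)
{
    PGconn *listeners[NUM_DBS];
    PGconn *notifier;
    char    conninfo[64];
    long    received = 0;

    /* One LISTENing backend per database. */
    for (int i = 0; i < NUM_DBS; i++)
    {
        snprintf(conninfo, sizeof(conninfo), "dbname=db_%d", i);
        listeners[i] = PQconnectdb(conninfo);
        if (PQstatus(listeners[i]) != CONNECTION_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(listeners[i]));
            return 1;
        }
        PQclear(PQexec(listeners[i], "LISTEN bench_channel"));
    }

    /* Send NOTIFYs from one database; only its own listener is interested. */
    notifier = PQconnectdb("dbname=db_0");
    for (int n = 0; n < NUM_NOTIFIES; n++)
    {
        PQclear(PQexec(notifier, "NOTIFY bench_channel"));

        for (int i = 0; i < NUM_DBS; i++)
        {
            PGnotify *note;

            PQconsumeInput(listeners[i]);
            while ((note = PQnotifies(listeners[i])) != NULL)
            {
                received++;
                PQfreemem(note);
            }
        }
    }

    printf("notifications received: %ld\n", received);
    return 0;
}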

To test, you need to create 150 databases as described in the script
and then simply execute it. Before patching you get the following
results (the last figure is the CPU usage of Postgres):

1559749330 Sent: 500, Recv: 1000, Delays: Min: 0.01, Max: 0.01, Avg: 0.01 [0.01/0.01/0.01/0.01/0.01], 269.07%
1559749335 Sent: 500, Recv: 1000, Delays: Min: 0.01, Max: 0.01, Avg: 0.01 [0.01/0.01/0.01/0.01/0.01], 268.07%
1559749340 Sent: 500, Recv: 1000, Delays: Min: 0.01, Max: 0.01, Avg: 0.01 [0.01/0.01/0.01/0.01/0.01], 270.94%

After patching you get the following:

1559749840 Sent: 500, Recv: 1000, Delays: Min: 0.01, Max: 0.02, Avg: 0.01 [0.01/0.01/0.01/0.01/0.01], 5.09%
1559749845 Sent: 500, Recv: 1000, Delays: Min: 0.01, Max: 0.01, Avg: 0.01 [0.01/0.01/0.01/0.01/0.01], 5.06%
1559749850 Sent: 500, Recv: 1000, Delays: Min: 0.01, Max: 0.01, Avg: 0.01 [0.01/0.01/0.01/0.01/0.01], 4.75%

The async queue functions in Postgres also no longer appear in the
perf output (they are below the measurement threshold).

As for the general approach, the actual optimisation here seems to be
that the async queue tail pointer is only updated once per SLRU page
instead of after every message. Doing that properly would require a
significantly larger patch, but shouldn't be too difficult. Thoughts?
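
For concreteness, here is a minimal sketch of that check, written
against the queue-position macros in src/backend/commands/async.c
(QUEUE_HEAD, QUEUE_TAIL, QUEUE_POS_PAGE, AsyncQueueLock); the helper
name is invented and this is not the attached patch:

/*
 * Illustrative sketch only: skip the expensive scan over all backend slots
 * unless the head has moved off the tail's SLRU page, i.e. unless
 * advancing the tail could actually free at least one whole page.
 */
static bool
asyncQueueTailAdvanceIsUseful(void)
{
    bool        useful;

    LWLockAcquire(AsyncQueueLock, LW_SHARED);
    useful = (QUEUE_POS_PAGE(QUEUE_HEAD) != QUEUE_POS_PAGE(QUEUE_TAIL));
    LWLockRelease(AsyncQueueLock);

    return useful;
}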

Have a nice day,
Martijn

On Tue, 4 Jun 2019 at 09:08, Martijn van Oosterhout <kleptog@gmail.com> wrote:
>
> Hoi hackers,
>
> We've been having issues with NOTIFYs blocking across multiple databases
> (see [1] for more details). That was on 9.4, but we've since updated the
> database to 11.3 and still have the same issue. Now, however, we could
> use perf to do profiling and got the following profile (irrelevant
> details elided):
>
> --32.83%--ProcessClientReadInterrupt
>    --32.68%--ProcessNotifyInterrupt
>       --32.16%--asyncQueueReadAllNotifications
>          --23.37%--asyncQueueAdvanceTail
>             --20.49%--LWLockAcquire
>                --18.93%--LWLockQueueSelf
>                   --12.99%--LWLockWaitListLock
>
> (from: perf record -F 99 -ag -- sleep 600)
>
> That shows that more than 20% of the time is spent in that single
> function, waiting for an exclusive lock on the AsyncQueueLock. This
> will block any concurrent session doing a NOTIFY in any database on
> the system. This would certainly explain the symptoms we're seeing
> (process xxx still waiting for AccessExclusiveLock on object 0 of
> class 1262 of database 0).
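>
(For reference, that lock identity matches the global serialization lock
that every committing NOTIFY takes; simplified from PreCommit_Notify()
in src/backend/commands/async.c, where DatabaseRelationId is 1262:)

    /*
     * Every transaction that sends a NOTIFY takes this lock at commit time
     * to keep queue entries in commit order.  It is the lock behind the
     * "object 0 of class 1262 of database 0" messages (1262 = pg_database).
     */
    LockSharedObject(DatabaseRelationId, InvalidOid, 0, AccessExclusiveLock);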
>
> Analysis of the code leads me to the following hypothesis (and hence
> to the attached patches):
>
> We have ~150 databases, each of which has 2 active backends with an
> active LISTEN. When a NOTIFY happens anywhere on any database it
> (under an exclusive lock) makes a list of 300 backends to send a
> signal to. It then wakes up all of those backends. Each backend then
> examines the message and all but one discards it as being for the
> wrong database. Each backend then calls asyncQueueAdvanceTail (because
> each backend's current position was at the tail), which then takes an
> exclusive lock and checks all the other backends to see if the tail
> can be advanced. All of these will conclude 'no', except the very last
> one, which concludes the tail can be advanced by about 50 bytes or so.
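>
(For reference, the scan being described looks roughly like this,
simplified from asyncQueueAdvanceTail() in src/backend/commands/async.c;
each woken listener runs the full loop while holding AsyncQueueLock
exclusively:)

static void
asyncQueueAdvanceTail(void)
{
    QueuePosition min;
    int           i;

    LWLockAcquire(AsyncQueueLock, LW_EXCLUSIVE);
    min = QUEUE_HEAD;
    /* Walk every backend slot to find the slowest reader. */
    for (i = 1; i <= MaxBackends; i++)
    {
        if (QUEUE_BACKEND_PID(i) != InvalidPid)
            min = QUEUE_POS_MIN(min, QUEUE_BACKEND_POS(i));
    }
    QUEUE_TAIL = min;
    LWLockRelease(AsyncQueueLock);

    /* ... then truncate any SLRU pages now entirely behind the tail ... */
}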
>
> So the inner loop of asyncQueueAdvanceTail will, while holding a
> global exclusive lock, execute 2*150 (listening backends) * 4000 (max
> backends) = 1.2 million times for basically no benefit. During this
> time, no other transaction anywhere in the system that does a NOTIFY
> will be able to commit.
>
> The attached patches attempt to reduce the overhead in two ways:
>
> Patch 1: Changes asyncQueueAdvanceTail to do nothing unless the
> QUEUE_HEAD is on a different page than the QUEUE_TAIL. The idea is
> that there's no point trying to advance the tail unless we can
> actually usefully truncate the SLRU. This does however mean that
> asyncQueueReadAllNotifications always has to call
> asyncQueueAdvanceTail since it can no longer be guaranteed that any
> backend is still at the tail, which is one of the assumptions of the
> current code. Not sure if this is a problem or if it can be improved
> without tracking much more state.
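>
(Sketched concretely, the tail end of asyncQueueReadAllNotifications()
would change roughly as follows; this is illustrative, not the actual
diff:)

    /*
     * Stock 11.x only asks for a tail advance when this backend had been
     * sitting exactly at the tail:
     *
     *     if (advanceTail)
     *         asyncQueueAdvanceTail();
     *
     * With the page check from Patch 1, the reader simply always asks; the
     * call returns immediately unless a whole SLRU page could be freed.
     */
    asyncQueueAdvanceTail();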
>
> Patch 2: Changes SignalBackends to only notify other backends when (a)
> they're in the same database as me or (b) the notify queue has advanced
> to a new SLRU page. This avoids backends being woken up for messages
> they are not interested in.
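>
(One plausible shape for that filter inside SignalBackends() is sketched
below; this is illustrative only, not the actual patch. In particular,
QUEUE_BACKEND_DBOID() is assumed: the stock 11.x per-backend queue slot
does not record the listener's database, so a real patch has to add that
or look it up some other way.)

    for (i = 1; i <= MaxBackends; i++)
    {
        int32   pid = QUEUE_BACKEND_PID(i);

        if (pid == InvalidPid || pid == MyProcPid)
            continue;

        /*
         * Skip listeners in other databases unless the queue head has moved
         * onto a later SLRU page than their read position; they would only
         * discard the message anyway, and they are still woken often enough
         * to let the tail advance.
         */
        if (QUEUE_BACKEND_DBOID(i) != MyDatabaseId &&
            QUEUE_POS_PAGE(QUEUE_BACKEND_POS(i)) == QUEUE_POS_PAGE(QUEUE_HEAD))
            continue;

        /* ... remember pid, to signal after releasing AsyncQueueLock ... */
    }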
>
> As a consequence of these changes, we can reduce the number of
> exclusive lock acquisitions and backend wake-ups in our case by a
> factor of 300. You do, however, still get a thundering herd at the
> end of each SLRU page.
>
> Note: these patches have not yet been extensively tested, and so
> should only be used as basis for discussion.
>
> Comments? Suggestions?
>
> [1] https://www.postgresql.org/message-id/CADWG95t0j9zF0uwdcMH81KMnDsiTAVHxmBvgYqrRJcD-iLwQhw@mail.gmail.com
>
> --
> Martijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/



-- 
Martijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/

Attachments
