On Wed, Aug 11, 2021 at 10:16:13AM -0400, Alvaro Herrera wrote:
> 1. this depends on reading the stats file; that's done once per
> transaction. So if you run the query twice in a transaction, the second
> time will take less time. You can know how much time is spent reading
> that file by subtracting both times.
Yeah, I noticed. Loading the stats file looks like the culprit. But does
that mean the whole stats file has to be read at once? I only need
per-database stats, not per-relation ones.
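For reference, one way to see how much data each load has to parse (a sketch; this assumes PostgreSQL 14 or older, where the cumulative stats are kept as files under stats_temp_directory):

```sql
-- Show where the stats files are written (default: pg_stat_tmp under $PGDATA).
-- Checking file sizes there (e.g. with ls -lh) shows how big the snapshot is
-- that every first stats access in a transaction has to read.
SHOW stats_temp_directory;
```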
> 2. EXPLAIN (VERBOSE) will tell you which functions are being called by
> the query. One of those loops across all live backends. Is that
> significant? You could measure by creating an identical view but
> omitting pg_stat_db_numbackends. Does it take the same time as the
> view? If not, then you know that looping around all live backends is
> slow.
Even `select datid, datname, xact_commit, xact_rollback from pg_stat_database;`
now takes about a second. A second call in the same connection, but in a
different transaction, takes 0.8 s; a second call in the same transaction
as the first takes 0.053 ms.
So it definitely suggests that loading the stats file is the problem.
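For the record, the timings above came from a session like this (with \timing
enabled in psql; the numbers are obviously from my setup):

```sql
BEGIN;
-- First call in the transaction: loads the stats snapshot (~1 s here).
SELECT datid, datname, xact_commit, xact_rollback FROM pg_stat_database;
-- Second call in the same transaction: reuses the cached snapshot (~0.05 ms).
SELECT datid, datname, xact_commit, xact_rollback FROM pg_stat_database;
COMMIT;
```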
> If the problem is (1) then you could have less tables, so that the file
> is smaller and thus faster to read, but I don't think you'll like that
> answer; and if the problem is (2) then you could reduce max_connections,
> but I don't think you'll like that either.
If I understand correctly, max_connections would only matter if the query
were faster without pg_stat_db_numbackends. And it isn't - it takes
basically the same time.
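That test was a stripped-down copy of the view, built from the underlying
pg_stat_get_db_*() functions (a sketch; the view name is mine):

```sql
-- Same columns as my monitoring query, but without the per-backend loop
-- that pg_stat_get_db_numbackends() does in the real pg_stat_database view.
CREATE VIEW pg_stat_database_no_backends AS
SELECT d.oid                               AS datid,
       d.datname,
       pg_stat_get_db_xact_commit(d.oid)   AS xact_commit,
       pg_stat_get_db_xact_rollback(d.oid) AS xact_rollback
FROM pg_database d;
```

Querying it takes essentially as long as the real view, which again points at
the stats file load rather than the backend loop.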
> I suspect there's not much you can do, other than patch the monitoring
> system to not read that view as often.
Once every 15 seconds doesn't seem to be too often, but perhaps I can do
something about it...
Best regards,
depesz