Discussion: pg_stat_statements


pg_stat_statements

From
"Godfrin, Philippe E"
Date:

Hi,

On Mon, May 02, 2022 at 12:45:28PM +0000, Godfrin, Philippe E wrote:

>> Greetings,
>>
>> I want to limit the query text that gets captured in pg_stat_statements. We
>> have sql statements with thousands of values clauses (upwards of 10,000) that
>> run at a 1 second interval. When just a handful are running plus 2 or 3 loads
>> using the same technique (10,000 entry values clauses), querying the
>> pg_stat_statements table gets bogged down (see below). With
>> pg_stat_statements.max set to 1000 statements, just querying the stats table
>> seems to impact the running statements! I have temporarily staved off the
>> issue by reducing the max to 250 statements, and I have made recommendations
>> to the development team to cut down the number of values clauses. However, it
>> seems to me that the ability to truncate the captured query would be a useful
>> feature.

 

>The stored queries are normalized so the values themselves won't be stored, only
>a "?" per value. And as long as all the queries have the same number of values
>there should be a single entry stored for the same role and database, so all in
>all it should limit the size of the stored query texts.
>On the other hand, with such a low pg_stat_statements.max, you may have a lot
>of entry evictions, which tends to bloat the external query file
>($PGDATA/pg_stat_tmp/pgss_query_texts.stat). Did you check how big it is and,
>if yes, how fast it grows? I've once seen the file being more than 1GB without
>any reason why, which was obviously slowing everything down. A simple call to
>pg_stat_statements_reset() fixed the problem, at least as far as I know, as I
>never had access to the server and never had any news after that.
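As a rough illustration of that check (a sketch only: pg_stat_file() takes a path
relative to the data directory and is restricted to superusers or roles that have
been granted access to it):

-- size and last modification of the external query text file
SELECT size, modification
FROM pg_stat_file('pg_stat_tmp/pgss_query_texts.stat');

-- if it has grown out of proportion, discard all entries (and the file)
SELECT pg_stat_statements_reset();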

 

I wasn't exactly clear about the queries. The values clauses themselves are not long;
we are using repeated values clauses:

INSERT INTO timeseries.dvc_104 (tag_id, event_ts, bool_val, float_val, int_val, string_val, last_updt_id)
VALUES ($1,$2,$3,$4,$5,$6,$7),($8,$9,$10,$11,$12,$13,$14),($15,$16,$17,$18,$19,$20,$21),
($22,$23,$24,$25,$26,$27,$28),($29,$30,$31,$32,$33,$34,$35),($36,$37,$38,$39,$40,$41,$42),
($43,$44,$45,$46,$47,$48,$49),($50,$51,$52,$53,$54,$55,$56),($57,$58,$59,$60,$61,$62,$63),
($64,$65,$66,$67,$68,$69,$70),($71,$72,$73,$74,$75,$76,$77),($78,$79,$80,$81,$82,$83,$84),
($85,$86,$87,$88,$89,$90,$91),($92,$93,$94,$95,$96,$97,$98)

This one's not long, but some 'load statements' have 10,000 values clauses, and others add up to 10,000 more
in an ON CONFLICT clause. I've checked the external query file and it's currently not large
at all, but I will keep an eye on that. When I had the setting at 1000 statements
the file was indeed over 1GB. For the record, development is reducing those statement
lengths.

 

>> I've peeked at the source code and I don't see track_activity_query_size
>> (pg_stat_activity.query) used, which would be one mechanism. I don't really
>> know what would be the right way to do this or even if it is a good idea,
>> i.e. if limiting that would have a larger impact on the statistics
>> ecosystem...

 

>pg_stat_statements used to truncate the query text to
>track_activity_query_size, but that limitation was removed when the query texts
>were moved to the external query file. It's quite convenient to have the full
>normalized query text available, especially when the application is using some
>ORM, as they tend to make SQL even more verbose than it already is. Having a
>very high number of values (and I'm assuming queries with different numbers of
>values) seems like a corner case, but truncating the query
>text would only fix part of the problem. It will lead to a very high number of
>different queryids, which is also problematic as frequent entry evictions also
>tend to have an insanely high overhead, and you can't have an infinite number
>of entries stored.

 

>> Thoughts or suggestions?

 

>You didn't explain how you're using pg_stat_statements. Do you really need to
>query pg_stat_statements with the query text each time? If you only need to
>get some performance metrics you could adapt your system to only retrieve the
>query text for the wanted queryid(s) once you find some problematic pattern,
>and/or cache the query texts in a table or some other place. But with a very low
>pg_stat_statements.max (especially if you can have a varying number of values
>from 1 to 10k) it might be hard to do.
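A sketch of that approach (column names vary across pg_stat_statements versions,
e.g. total_exec_time was total_time before PostgreSQL 13, and the queryid below is
hypothetical):

-- routine polling: skip the query texts entirely
SELECT queryid, calls, total_exec_time, rows
FROM pg_stat_statements(showtext := false)
ORDER BY total_exec_time DESC
LIMIT 20;

-- only fetch the (possibly huge) text for a queryid that looks problematic
SELECT query FROM pg_stat_statements WHERE queryid = 1234567890;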

 

The first observation is how long a simple query took:

# select count(*) from pg_stat_statements;
 count
-------
   971
Time: 6457.985 ms (00:06.458)

MORE than six seconds for a mere 971 rows! Furthermore, when removing the long queries:

# select count(*) from pg_stat_statements(showtext:=false);
 count
-------
   970
Time: 10.644 ms

Only 10ms…

 

Second, we have Datadog installed. Datadog queries the pg_stat_statements table
every 10 seconds. The real pain point is that querying pg_stat_statements seems
to have an impact on running queries, specifically inserts in my case.

I believe this is an actual impact that needs a solution.

My apologies; for some reason these mails are not making it to me.

Phil Godfrin | Database Administration
NOV
NOV US | Engineering Data
9720 Beechnut St | Houston, Texas 77036
M  281.825.2311
E   Philippe.Godfrin@nov.com

Re: pg_stat_statements

From
Julien Rouhaud
Date:
Hi,

On Tue, May 03, 2022 at 01:30:32PM +0000, Godfrin, Philippe E wrote:
>
> I wasn't exactly clear about the queries. The values clauses themselves are not long -
> We are using repeated values clauses:
>
> INSERT INTO timeseries.dvc_104 (tag_id, event_ts, bool_val, float_val, int_val, string_val, last_updt_id)
> VALUES ($1,$2,$3,$4,$5,$6,$7),($8,$9,$10,$11,$12,$13,$14),($15,$16,$17,$18,$19,$20,$21),
> ($22,$23,$24,$25,$26,$27,$28),($29,$30,$31,$32,$33,$34,$35),($36,$37,$38,$39,$40,$41,$42),
> ($43,$44,$45,$46,$47,$48,$49),($50,$51,$52,$53,$54,$55,$56),($57,$58,$59,$60,$61,$62,$63),
> ($64,$65,$66,$67,$68,$69,$70),($71,$72,$73,$74,$75,$76,$77),($78,$79,$80,$81,$82,$83,$84),
> ($85,$86,$87,$88,$89,$90,$91),($92,$93,$94,$95,$96,$97,$98)
>
> This one's not long, but some 'load statements' have 10,000 values clauses, others add up to 10,000 more
> in an ON CONFLICT clause. I've checked the external Query file and it's currently not large
> at all. But I will keep an eye on that. When I had The settings at 1000 statements
> the file was indeed over 1GB. For the record, development is reducing those statement
> lengths.
> [...]
> The first observation is how long a simple query took:
>
> # select count(*) from pg_stat_statements;
> count
> -------
>    971
> Time: 6457.985 ms (00:06.458)
>
> MORE than six seconds for a mere 971 rows! Furthermore, when removing the long queries:
> # select count(*) from pg_stat_statements(showtext:=false);
> count
> -------
>    970
> Time: 10.644 ms
>
> Only 10ms...

Well, 10ms is still quite slow.

You're not removing the long query texts, you're removing all query texts.
I don't know if the overhead comes from processing at least some long
statements or is mostly due to having to retrieve the query file.  Do you get
the same times if you run the query twice?  Maybe you're short on RAM and have
somewhat slow disks, and the text file has to be read from disk rather than OS
cache?

Also I don't know what you mean by "not large at all", so it's hard to compare
or try to reproduce.  FWIW on some instance I have around, I have a 140kB file
and querying pg_stat_statements *with* the query text file only takes a few ms.

You could try to query that view with some unprivileged user.  This way you
will still retrieve the query text file but will only emit "<insufficient
privilege>" rather than processing the query texts, this may narrow down the
problem.  Or better, if you could run perf [1] to see where the overhead really
is.
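For example, something along these lines (the role name is hypothetical; the
extension's install script grants SELECT on the view to PUBLIC):

-- as a superuser, create a throwaway role with no special privileges
CREATE ROLE pgss_probe LOGIN PASSWORD 'change-me';

-- then, connected as pgss_probe, time the same query; other users' texts come
-- back as "<insufficient privilege>" while the query text file is still read
SELECT count(*) FROM pg_stat_statements;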

> Second, we have Datadog installed. Datadoq queries the pg_stat_statements table
> every 10 seconds. The real pain point is querying the pg_stat_statements seems
> to have an impact on running queries, specifically inserts in my case.

I think this is a side effect of having a very low pg_stat_statements.max, if
of course you have more queries than the current value.

If the extra time is due to loading the query text file and if it's loaded
after acquiring the lightweight lock, then you will prevent evicting or
creating new entries for a long time, which means that the query execution for
those queries will be blocked until the query on pg_stat_statements ends.

There are unfortunately *a lot* of unknowns here, so I can't do anything apart
from guessing.

> I believe this is an actual impact that needs a solution.

First, if you have an OLTP workload you have to make sure that
pg_stat_statements.max is high enough so that you don't have to evict entries,
or at least not often.  Then, I think that querying pg_stat_statements every
10s is *really* aggressive, that's always going to have some noticeable
overhead.  For the rest, we need more information to understand where the
slowdown is coming from.
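Raising the limit could look like this (a sketch; pg_stat_statements.max can only
be set at server start, so the new value takes effect after a restart):

-- current limit
SHOW pg_stat_statements.max;

-- raise it, then restart the server
ALTER SYSTEM SET pg_stat_statements.max = 10000;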

[1] https://wiki.postgresql.org/wiki/Profiling_with_perf



RE: [EXTERNAL] Re: pg_stat_statements

From
"Godfrin, Philippe E"
Date:

 

 

From: Julien Rouhaud <rjuju123@gmail.com>
Sent: Wednesday, May 4, 2022 10:24 PM
To: Godfrin, Philippe E <Philippe.Godfrin@nov.com>
Cc: pgsql-hackers@lists.postgresql.org
Subject: [EXTERNAL] Re: pg_stat_statements

 


 

Hello Julien,

Thanks very much for looking closely at this. To answer your questions:

I misspoke; the query file at the time of the queries above was around 1GB.

I don't believe I am short on RAM, although I will re-examine that aspect. I'm running 32GB
with a 22GB shared pool, which seems OK to me. The disks are SSD (AWS EBS) and
the disk volumes are the same as the data volumes. If a regular 1GB file on disk
took 6 seconds to read, the rest of the system would be in serious degradation.

The impact on running queries was observed when the max was set at 1000. I don't
quite understand what you keep saying about evictions and other things relative to the
pgss file. Can you refer me to some detailed documentation or a good article which
describes the processes you're alluding to?

Insofar as querying the stats table every 10 seconds, I think that's not aggressive enough, as
I want to have statement monitoring as close to realtime as possible.

You are indeed correct insofar as unknowns; the biggest one for me is that I don't know
enough about how the stats extension works. As I asked before, more detail on the
internals of the extension would be useful. Is my only choice in that regard to browse
the source code?

Short of running the profiler, which should deal with the unknowns, any other ideas?

Thanks,

philippe

 

Re: [EXTERNAL] Re: pg_stat_statements

From
Julien Rouhaud
Date:
On Thu, May 05, 2022 at 12:21:41PM +0000, Godfrin, Philippe E wrote:
>
> Thanks very much for looking closely at this. To answer your questions:
> I misspoke the query file at the time of the queries above was around 1GB.

Ah, that's clearly big enough to lead to some slowdown.

> I don't believe I am short on RAM, although I will re-examine that aspect. I'm running 32GB
> with a 22GB shared pool, which seems OK to me. The disk are SSD (AWS EBS) and
> the disk volumes are the same as the data volumes. If a regular file on disk at 1GB
> took 6 seconds to read, the rest of the system would be in serious degradation.

I don't know what you mean by shared pool, and you also didn't give any kind of
information about your postgres usage, workload, number of connections or
anything, so it's impossible to know.  Note that if your system is quite busy
you could definitely have some IO saturation, especially if that file is
discarded from the OS cache, so I wouldn't blindly rule that possibility out.  I
suggested multiple ways to try to figure out if that's the problem though, so
having such an answer would be better than guessing whether IO or the "AWS EBS"
(which I also don't know anything about) is a problem or not.

> The impact on running queries was observed when the max was set at 1000. I don't
> quite understand what you keep saying about evictions and other things relative to the
> pgss file. Can you refer me to some detailed documentation or a good article which
> describes the processes you're alluding to?

I don't think there's any thorough documentation or article explaining how
pg_stat_statements works internally.  But you have a maximum number of
different (identified by userid, dbid, queryid) entries stored, so if your
workload leads to more entries than the max then pg_stat_statements will have
to evict the least used ones to store the new ones, and that process is costly
and done under an exclusive lwlock.  You didn't say which version of postgres
you're using, but one thing you can do to see if you probably have evictions is
to check the number of rows in the pg_stat_statements view.  If the number changes
very often and is always close to pg_stat_statements.max then you probably have
frequent evictions.
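A sketch of that check; on PostgreSQL 14 or later, pg_stat_statements_info also
exposes a dealloc counter tracking how many times entries had to be evicted:

-- number of stored entries versus the configured maximum
SELECT count(*) AS entries,
       current_setting('pg_stat_statements.max') AS max_entries
FROM pg_stat_statements(showtext := false);

-- PostgreSQL 14+ only
SELECT dealloc, stats_reset FROM pg_stat_statements_info;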

> Insofar as querying the stats table every 10 seconds, I think that's not aggressive enough as
> I want to have statement monitoring as close to realtime as possible.

What problem are you trying to solve?  Why aren't you using pg_stat_activity if
you want realtime overview of what is happening?
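For near-realtime visibility, something like this against pg_stat_activity may be
enough (a sketch; note that the text there is truncated to track_activity_query_size):

-- currently running statements, longest-running first
SELECT pid, state, wait_event_type, wait_event,
       now() - query_start AS runtime,
       left(query, 100) AS query_prefix
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC NULLS LAST;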

> You are indeed correct insofar as unknowns - the biggest one for me is I don't know
> enough about how the stats extension works - as I asked before, more detail on the
> internals of the extension would be useful. Is my only choice in that regard to browse
> the source code?

I think so.

> Short of running the profile that should deal with the unknowns, any other ideas?

Do you mean using perf, per the "profiling with perf" wiki article?  Other than
that, I suggested some other ways to try to narrow down the problem; what was the
outcome of those?