On Fri, Dec 20, 2019 at 02:35:37PM +0300, Alexey Kondratov wrote:
> On 19.12.2019 20:52, Robert Haas wrote:
> > On Thu, Dec 19, 2019 at 10:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > > Bruce Momjian <bruce@momjian.us> writes:
> > > > Good question. I am in favor of allowing a larger value if no one
> > > > objects. I don't think adding the min/max is helpful.
> > >
> > > The original poster.
>
>
> And probably anyone else who debugs stuck queries from yet another crazy ORM.
> Yes, one could use log_min_duration_statement, but being able to get the
> query text directly from pg_stat_activity without eyeballing the logs is nice.
> Also, IIRC log_min_duration_statement applies only to completed statements.
Yes, you would need log_statement = 'all'.
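For reference, a minimal postgresql.conf sketch of the two logging routes
(the 5s threshold is just an illustrative value):

```
# Log every statement at execution start, so even statements that
# never complete still show up in the log:
log_statement = 'all'

# Alternatively, log only statements that take longer than 5000 ms.
# Note this is written at statement completion, so a stuck query
# will not appear here until (unless) it finishes:
log_min_duration_statement = 5000
```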
> > I think there are pretty obvious performance and memory-consumption
> > penalties to very large track_activity_query_size values. Who exactly
> > are we really helping if we let them set it to huge values?
> >
> > (wanders away wondering if we have suitable integer-overflow checks
> > in relevant code paths...)
>
>
> The value of pgstat_track_activity_query_size is in bytes, so any setting
> below INT_MAX seems safe from that perspective. However, since it is
> multiplied by NumBackendStatSlots, any reasonable value should be far
> below INT_MAX (~2 GB).
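To put the quoted multiplication in concrete terms, here is a rough
back-of-the-envelope sketch. The slot count and the 1 MB setting below are
illustrative assumptions, not real defaults (the actual NumBackendStatSlots
formula lives in the backend and also counts auxiliary processes):

```python
# Each backend status slot reserves track_activity_query_size bytes
# of shared memory for the query text.
track_activity_query_size = 1024 * 1024  # hypothetical 1 MB setting

# Assumed slot count: max_connections = 100 plus ~8 auxiliary processes.
num_backend_stat_slots = 100 + 8

total_bytes = track_activity_query_size * num_backend_stat_slots
print(total_bytes // (1024 * 1024), "MB")  # → 108 MB of shared memory
```

So even a modest per-query budget multiplies quickly across slots, which
is why huge settings are a poor trade-off well before any int overflow.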
>
> Honestly, it does not look to me like something badly needed, but still:
> we already have hundreds of GUCs, and it is easy for a user to build a
> sub-optimal configuration, so does this overprotection make sense?
I can imagine using larger pgstat_track_activity_query_size values for
data warehouse queries, where queries are long and there are only a few
of them.
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +