Thanks all - we did have our max_connections set very high (100k) because we upgraded this db in a load test
environment, and did not want to reboot the db after hitting the lower, saner, production limits during load testing.
I am surprised this allocation was taking place for unused connections - I've also verified that lowering
max_connections to 10K on this instance fixed the issue for us. Much appreciated!
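For what it's worth, the back-of-envelope arithmetic matches Tom's reproduction below. This is a simplified sketch - it assumes the activity-string buffer scales as roughly max_connections × track_activity_query_size, ignoring auxiliary processes and per-entry overhead:

```python
# Rough check of the pg_stat_activity query-string allocation.
# Assumption: buffer size ~= max_connections * track_activity_query_size
# (the real allocation also covers aux backends and bookkeeping).

ONE_GB = 1024 ** 3  # palloc's normal limit is 1 GB - 1 bytes

track_activity_query_size = 102400  # 100 kB, as in this report

# ~16K connections already pushes past the 1 GB palloc limit:
assert 16384 * track_activity_query_size > ONE_GB - 1

# ...while 10K connections stays just under it:
assert 10_000 * track_activity_query_size < ONE_GB - 1
```

So 10K connections at 100 kB per query string sits barely below the 1 GB boundary, which is consistent with the lower setting making the error go away.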
Best,
James
> On May 7, 2019, at 11:14 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Alvaro Herrera <alvherre@2ndquadrant.com> writes:
>> Hmm, but 102400 is only 100kB, nowhere near the 1GB-1 limit, so there's
>> something odd going on there.
>
> I can reproduce the described behavior by also setting max_connections
> to something around 16K.
>
> Now, it seems pretty silly to me to be burning in excess of 1GB of shmem
> just for the current-query strings, and then that much again in every
> backend that reads pg_stat_activity. But should we be telling people they
> can't do it? I'm working on a patch to use MemoryContextAllocHuge for
> the "localactivity" buffer in pgstat_read_current_status. It might seem
> dumb now, but perhaps in ten years it'll be common.
>
> regards, tom lane