Øystein Kolsrud wrote on 17.09.2020 at 11:19:
> All settings typically referred to (like "work_mem" and
> "temp_buffers") are at default settings, and I have a very hard time
> seeing how those settings could add up to memory in the magnitude of
> 200MB. The "max_connections" settings is quite high (>1000), but
> there are nowhere near that many connections active in the system.
> The only way I have been able to trigger growth of processes in my
> tests is to extract large amounts of data, but when I do that, it is
> always the "shared working set" that grows. "private working set"
> remains stable at around 5 MB.
It's not clear to me how many open (as opposed to "active") connections you have.
An "active" connection is one that is currently _running_ a statement;
an "open" connection is one that is "idle", i.e. connected but not doing anything.
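To see how your connections split between the two, you could run something
like the following against pg_stat_activity (the "state" and "backend_type"
columns are available in reasonably recent releases; on very old versions
the column names differ):

```sql
-- Count client connections grouped by their current state
-- ('active', 'idle', 'idle in transaction', ...)
SELECT state, count(*)
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY state
ORDER BY count(*) DESC;
```

If that shows hundreds of rows in state 'idle', those backends still hold
whatever memory they allocated earlier, which is the point below.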
To my knowledge, a single backend process (= one connection) will not release
the memory it has once allocated back to the operating system. The only way to
free that memory is to physically close the connection.
So if all of your > 1000 connections are currently open but idle (not "active"),
they will still hold the memory they allocated _while_ they were active.
With that high a number of connections, it isn't really surprising that
you run out of memory at some point.