On Thu, Jun 9, 2016 at 8:46 AM, hubert depesz lubaczewski
<depesz@depesz.com> wrote:
> Hi,
> so, we are running 9.3.10 in production, but I tested it in 9.6, and the
> problem seems to be there too in 9.6, though to a much lesser extent.
>
> In our database we have ~ 70000 tables (~ 180 tables in ~ 400 schemas).
>
> So far we used application in such a way that each connection could use
> only tables from single schema.
>
> But then, we switched situation to where single connection (very long,
> as it's reused thanks to pgbouncer) can effectively query tables from
> all schemas.
>
> And this caused memory usage to explode, to the point that our server
> wasn't able to handle it (64gb of mem gone).
You should probably use pgbouncer's server_lifetime to force
connections to be discarded and recreated every now and then. That
parameter seems to exist specifically for dealing with this kind of
problem.
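For instance, something like this in pgbouncer.ini (the value here is only illustrative; tune it to how fast your backends grow):

```ini
[pgbouncer]
; Close and replace server-side connections once they have been open
; this many seconds, so a backend's caches cannot grow without bound.
server_lifetime = 3600
```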
....
> This effectively does:
> select 1, 1, * from <table> limit <1..3>
> for each table.
Is all of that necessary? Can't you reproduce the problem just as
well with "select count(*) from <table>;" for each table?
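Something along these lines should reproduce it from a single backend (a sketch, assuming plain tables in user schemas; watch the backend's memory via top or /proc/&lt;pid&gt;/smaps while it runs):

```sql
-- Touch every user table once from one backend; each first access
-- populates that backend's relcache/catcache and memory should grow.
DO $$
DECLARE
    t regclass;
    n bigint;
BEGIN
    FOR t IN SELECT c.oid::regclass
             FROM pg_class c
             JOIN pg_namespace ns ON ns.oid = c.relnamespace
             WHERE c.relkind = 'r'
               AND ns.nspname NOT IN ('pg_catalog', 'information_schema')
    LOOP
        EXECUTE format('SELECT count(*) FROM %s', t) INTO n;
    END LOOP;
END $$;
```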
> on 9.3 after ~ 35000 tables, pg process grew by 1GB, and it wasn't shared
> buffers, as smaps showed that the memory was anonymous.
>
> In 9.6, after 35000 tables it grew by ~ 71MB. Which is much better, but
> still not really optimal.
>
> The same situation happens when I was *not* using prepared statements on server side.
>
> Basically it looks like PostgreSQL "caches" query plans? parsed elements?
It is caching metadata for every table and index touched by the backend.
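On first access to a relation, the backend loads its pg_class, pg_attribute, pg_index, etc. rows into the relation and catalog caches (relcache/catcache), which live for the lifetime of the connection and are not evicted. Roughly the kind of per-table metadata involved ('my_schema.my_table' is a hypothetical name):

```sql
-- Sketch of the catalog rows a backend caches for one relation:
SELECT c.relname,
       count(DISTINCT a.attnum)     AS attributes,
       count(DISTINCT i.indexrelid) AS indexes
FROM pg_class c
LEFT JOIN pg_attribute a ON a.attrelid = c.oid AND a.attnum > 0
LEFT JOIN pg_index i ON i.indrelid = c.oid
WHERE c.oid = 'my_schema.my_table'::regclass
GROUP BY c.relname;
```

With ~70000 tables, even a modest per-table cache entry adds up across every long-lived connection.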
Cheers,
Jeff