Thread: Performance impact of lowering max_files_per_process


Performance impact of lowering max_files_per_process

From: Thomas Kellerer
Date:
We have a customer project where Postgres is using too many file handles during peak times (around 150,000).

Apart from re-configuring the operating system (CentOS), this could also be mitigated by lowering
max_files_per_process.

I wonder what performance implications that has on a server with around 50-100 active connections (through pgBouncer).

One of the reasons (we think) that Postgres needs that many file handles is the fact that the schema is quite large (in
terms of tables and indexes) and the sessions are touching many tables during their lifetime.
 

My understanding of the documentation is that Postgres will work just fine if we lower the limit; it simply releases
the cached file handles if the limit is reached. But I have no idea how expensive opening a file handle is in Linux.
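Out of curiosity, here is a tiny microbenchmark sketch of the raw open()/close() cost on an existing file on Linux (the function name and iteration count are just illustrative; a real reopen inside Postgres may do somewhat more work than this bare syscall pair):

```python
import os
import tempfile
import time

def bench_open_close(path: str, iterations: int = 10_000) -> float:
    """Return the average seconds per open()+close() pair on `path`."""
    start = time.perf_counter()
    for _ in range(iterations):
        fd = os.open(path, os.O_RDONLY)  # open a read-only descriptor
        os.close(fd)                     # and release it immediately
    return (time.perf_counter() - start) / iterations

# Benchmark against a throwaway temp file so the sketch is self-contained.
with tempfile.NamedTemporaryFile() as tmp:
    per_call = bench_open_close(tmp.name)
    print(f"~{per_call * 1e6:.1f} microseconds per open/close pair")
```

On typical hardware this lands in the low single-digit microseconds per pair, which gives a rough upper bound on what each extra file reopen could cost.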
 

So assuming the sessions (and thus the queries) actually do need that many file handles, what kind of performance
impact (if any) is to be expected by lowering that value for Postgres to e.g. 500?
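For anyone wanting to verify the actual per-backend usage before changing anything: on Linux you can count a process's open descriptors via /proc. This is just a sketch; for a real backend you would substitute the pid returned by `SELECT pg_backend_pid();`, while here the shell's own pid stands in so the snippet is self-contained:

```shell
# Count open file descriptors of one process via /proc (Linux only).
# For a Postgres backend, obtain the pid with: SELECT pg_backend_pid();
# We demonstrate on the current shell's own pid ($$) as a stand-in.
PID=$$
ls /proc/"$PID"/fd | wc -l
```

The setting itself then goes into postgresql.conf, e.g. `max_files_per_process = 500` (the default is 1000); it takes effect for newly started backends after a reload.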
 

Regards
Thomas
  


Re: Performance impact of lowering max_files_per_process

From: Thomas Kellerer
Date:
Thomas Kellerer wrote on 19.01.2018 at 17:48:
>
> I wonder what performance implications that has on a server with
> around 50-100 active connections (through pgBouncer).
> 
> My understanding of the documentation is, that Postgres will work
> just fine if we lower the limit, it simply releases the cached file
> handles if the limit is reached. But I have no idea how expensive
> opening a file handle is in Linux.
> 
> So assuming the sessions (and thus the queries) actually do need that
> many file handles, what kind of performance impact (if any) is to be
> expected by lowering that value for Postgres to e.g. 500?

I would be really interested in an answer.