Max files per process..

From: Eamonn Kent
Subject: Max files per process..
Date:
Msg-id: 9146E3EBBFBCC94D95F95A1C4065348A01282E11@exch01.xsigo.com
Replies: Re: Max files per process..  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-admin

Hi,

We are using Postgres 8.1.4 on an embedded Linux device. I have max_files_per_process unset, so it should take the default value of 1000. However, lsof shows that at times postmaster has 1023 files open.
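As an aside, on Linux the same spot check can be done without lsof by counting entries under /proc/<pid>/fd. A minimal sketch (it inspects its own process; substituting the postmaster's PID is left to the reader):

```python
import os

def open_fd_count(pid: int) -> int:
    """Count open file descriptors of a process via /proc/<pid>/fd (Linux only)."""
    return len(os.listdir(f"/proc/{pid}/fd"))

# Inspecting our own process here; on a real system pass the postmaster's
# PID (e.g. taken from the postmaster.pid file in the data directory).
print(open_fd_count(os.getpid()))
```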


My understanding is that if Postgres exceeds this limit, the result is a warning, not an error; that is, Postgres will close and re-open files as needed.


Questions:

- Why is it exceeding this limit? Is this a "soft limit"?

- If so, is there a way to set a hard limit? (Or is there a rule of thumb, such as: if you never want Postgres to exceed 1024, set it to 900?)

- If Postgres is at the limit and tries to syslog or perform some other operation that needs a new fd, then presumably that operation could fail. When at or near the limit, can we be sure that no spurious allocation occurs?
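For context on the "soft limit" question: max_files_per_process is PostgreSQL's own internal target, while the kernel separately enforces its per-process cap through RLIMIT_NOFILE (1024 is a common Linux default). A minimal sketch for inspecting the kernel-side limits, assuming Python is available on the device:

```python
import resource

# Kernel-enforced per-process fd limits. These are separate from
# PostgreSQL's max_files_per_process, which is Postgres' own target.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft limit (ulimit -n):", soft)
print("hard limit:", "unlimited" if hard == resource.RLIM_INFINITY else hard)
```

An unprivileged process may raise its soft limit up to the hard limit; only root can raise the hard limit.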


According to the Postgres documentation:


max_files_per_process (integer)


    Sets the maximum number of simultaneously open files allowed to each server subprocess. The default is 1000. If the kernel is enforcing a safe per-process limit, you don't need to worry about this setting. But on some platforms (notably, most BSD systems), the kernel will allow individual processes to open many more files than the system can really support when a large number of processes all try to open that many files. If you find yourself seeing "Too many open files" failures, try reducing this setting. This option can only be set at server start.
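Following the documentation's advice, the setting can be pinned down explicitly in postgresql.conf. The value below is only illustrative headroom under a 1024 kernel cap, not a tested recommendation:

```
# postgresql.conf (takes effect only at server start)
max_files_per_process = 900   # leave headroom under the kernel's per-process limit
```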

