Re: [HACKERS] max_files_per_processes vs others uses of file descriptors

From Andres Freund
Subject Re: [HACKERS] max_files_per_processes vs others uses of file descriptors
Date
Msg-id 20170807205944.mlau7zhmsgt7oyzj@alap3.anarazel.de
In reply to Re: [HACKERS] max_files_per_processes vs others uses of file descriptors  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: [HACKERS] max_files_per_processes vs others uses of file descriptors  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On 2017-08-07 16:52:42 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > These days there's a number of other consumers of
> > fds. E.g. postgres_fdw, epoll, ...  All these aren't accounted for by
> > fd.c.
> 
> > Given how close max_files_per_process is to the default linux limit of
> > 1024 fds, I wonder if we shouldn't increase NUM_RESERVED_FDS by quite a
> > bit?
> 
> No, I don't think so.  If you're depending on the NUM_RESERVED_FDS
> headroom for anything meaningful, *you're doing it wrong*.  You should be
> getting an FD via fd.c, so that there is an opportunity to free up an FD
> (by closing a VFD) if you're up against system limits.  Relying on
> NUM_RESERVED_FDS headroom can only protect against EMFILE not ENFILE.

How would this work for libpq-based stuff like postgres_fdw? Or some
random PL doing something with files? There's very little headroom here.
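
[For reference, the fd.c route Tom describes above is roughly what
BasicOpenFile() provides: it opens a kernel FD directly and, on
EMFILE/ENFILE, closes a least-recently-used VFD and retries. A minimal
sketch of a consumer going through it instead of raw open(); the
three-argument signature is the PG 10-era one and may differ in other
branches. OpenTransientFile() is the same idea plus automatic close at
end of transaction.]

    #include "postgres.h"

    #include <fcntl.h>

    #include "storage/fd.h"

    /*
     * Open a kernel FD through fd.c so that, if we are up against the
     * process FD limit, fd.c can close a VFD and retry instead of just
     * failing with EMFILE/ENFILE.
     */
    static int
    open_scratch_file(const char *path)
    {
        int         fd;

        fd = BasicOpenFile((char *) path, O_RDWR | O_CREAT | PG_BINARY, 0600);
        if (fd < 0)
            ereport(ERROR,
                    (errcode_for_file_access(),
                     errmsg("could not open file \"%s\": %m", path)));

        return fd;              /* caller is responsible for close(fd) */
    }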


> What this means is that the epoll stuff needs to be tied into fd.c more
> than it is now, but that's likely a good thing anyway; it would for
> example provide a more robust way of ensuring we don't leak epoll FDs at
> transaction abort.

Not arguing against that.
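
[Purely as a sketch of that direction: today the epoll FD behind a
WaitEventSet comes straight from epoll_create1() and fd.c never hears
about it. Tying it into fd.c could look roughly like the following,
where account_external_fd()/forget_external_fd() are hypothetical
accounting hooks, not functions that exist in fd.c:]

    #include <sys/epoll.h>
    #include <unistd.h>

    /* Hypothetical fd.c hooks, for illustration only -- they do not exist. */
    extern bool account_external_fd(void);
    extern void forget_external_fd(void);

    /*
     * Create an epoll FD only after reserving headroom from fd.c, so the
     * descriptor is counted against the process's FD budget instead of
     * silently eating into the NUM_RESERVED_FDS slop.
     */
    static int
    create_accounted_epoll_fd(void)
    {
        int         epfd;

        if (!account_external_fd())
            return -1;              /* no headroom; caller must cope */

        epfd = epoll_create1(EPOLL_CLOEXEC);
        if (epfd < 0)
            forget_external_fd();   /* creation failed, give headroom back */

        return epfd;
    }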


Greetings,

Andres Freund


