> Even after extending the number of file descriptors on the kernel quite
> significantly, I still get the occasional crash due to too many open files. I
> would say that the current policy is too aggressive under heavy loads.
>> This actually brings up a good point. We currently cache all
>> descriptors up to the limit the OS will allow for a process.
>>
>> Is this too aggressive? Should we limit it to 50% of the maximum?
We could limit the number of open files per backend by using limit,
ulimit, etc., if all file accesses went through Vfd. Is there any
reason to use open() directly, for example, in mdblindwrt()?
Also, I have noticed that some files, such as pg_internal.init, do not
need to be kept open and should be closed after we finish using them,
to save a fd.
--
Tatsuo Ishii