Bruce Momjian wrote:
>
> > > If I have 40 tables, and each table is made up of 6-7 files
> > > including indexes etc., then per process I could be opening up
> > > to 200-240 files!
> > >
> > > This means that with 64 db connections I could be hitting
> > > 12800-15360 open files on my system! What is the current Linux
> > > limit without a kernel recompile? What is the Linux limit with
> > > a kernel recompile?
> > >
> > > Why can't I just tell postgres to close those files, say, 2
> > > minutes after it is done with them and they have been idle?
> >
> > Take a look at /pg/backend/storage/file/fd.c::pg_nofile(). If you
> > change the line:
>
> This actually brings up a good point. We currently cache all
> descriptors up to the limit the OS will allow for a process.
>
> Is this too aggressive? Should we limit it to 50% of the maximum?
It seems difficult to guess one setting correctly for everyone. How
hard would it be to make the limit configurable, so that folks could
set it either as a hard limit (e.g. 100 open files per backend, or
1000 per server) or as a percentage of the OS maximum? SET would seem
most convenient from a user's point of view, but even something in
configure would be very useful for managing resources.
Cheers,
Ed Loehr