> "Mark Alliban" <MarkA@idnltd.com> writes:
> > I am having problems with the number of open files on Redhat 6.1. The
> > value of /proc/sys/fs/file-max is 4096 (the default), but this value
> > is reached with about 50 ODBC connections. Increasing the file-max
> > value would only temporarily improve matters, because in the long term
> > I expect to have 500+ active connections. How come there are so many
> > open files per connection? Is there any way to decrease the number of
> > open files, so that I don't have to increase file-max to immense
> > proportions?
>
> You can hack the routine pg_nofile() in src/backend/storage/file/fd.c
> to return some smaller number than it's returning now, but I really
> wouldn't advise reducing it below thirty or so. You'll still need to
> increase file-max.
>
> regards, tom lane
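The change Tom describes amounts to clamping whatever pg_nofile()
computes. A minimal sketch, assuming the routine derives its limit from
sysconf(_SC_OPEN_MAX) (the real code in fd.c differs, and
MAX_FILES_PER_BACKEND is just an illustrative name):

    #include <unistd.h>

    /* assumed cap; Tom advises not going below thirty or so */
    #define MAX_FILES_PER_BACKEND 64

    static long
    pg_nofile(void)
    {
        /* start from the kernel's per-process descriptor limit */
        long limit = sysconf(_SC_OPEN_MAX);

        /* clamp it to the smaller, hand-picked ceiling */
        if (limit < 0 || limit > MAX_FILES_PER_BACKEND)
            limit = MAX_FILES_PER_BACKEND;
        return limit;
    }

Raising file-max itself is a one-liner as root:

    echo 16000 > /proc/sys/fs/file-max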
I have increased file-max to 16000. However after about 24 hours of running,
pgsql crashed and errors in the log showed that the system had run out of
memory. I do not have the exact error message, as I was in a hurry to get
the system up and running again (it is a live production system). The system
has 512MB of memory and there were 47 ODBC sessions in progress, so I cannot
believe that the system *really* ran out of memory. I start postmaster
with -B 2048 -N 500, if that is relevant.
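For what it's worth, a back-of-envelope sum of those settings comes
nowhere near 512MB, assuming the default 8KB buffer block size and
guessing a few MB of private memory per backend (both numbers are my
assumptions):

    shared buffers:  2048 * 8KB          =   16MB
    backends:          47 * ~4MB (guess) ~= 188MB
    total                                ~= 204MB, well under 512MB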
Also, backends seem to hang around for about a minute after I close the ODBC
connections. Is this normal?
Thanks,
Mark.