> "Mark Alliban" <MarkA@idnltd.com> writes:
> > I have increased file-max to 16000. However after about 24 hours of
> > running, pgsql crashed and errors in the log showed that the system had
> > run out of memory. I do not have the exact error message, as I was in a
> > hurry to get the system up and running again (it is a live production
> > system). The system has 512MB memory and there were 47 ODBC sessions in
> > progress, so I cannot believe that the system *really* ran out of memory.
>
> Oh, I could believe that, depending on what your ODBC clients were
> doing. 10 meg of working store per backend is not out of line for
> complex queries. Have you tried watching with 'top' to see what a
> typical backend process size actually is for your workload?
>
> Also, the amount of RAM isn't necessarily the limiting factor here;
> what you should have told us is how much swap space you have ...
530MB of swap. top reports that the backends use around 17-19MB each on
average. Are you saying, then, that if I have 500 concurrent queries I will
need 8GB of swap space? Is there any way to limit the amount of memory a
backend can use, and if there is, would it be a very bad idea to do it?
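(For anyone wanting to reproduce the measurement, a one-liner along these
lines sums the resident sizes across all backends; the `postmaster` process
name is an assumption for this platform:)

```shell
# Sum the resident set size (KB) of all postmaster/backend processes.
# The process name varies by platform and PostgreSQL version; adjust to suit.
ps -C postmaster -o rss= | awk '{ t += $1 } END { print t+0 " KB across " NR " backends" }'
```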
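Would a per-process address-space limit, set in the shell that launches the
postmaster so the backends inherit it, be the sort of thing you mean? A
sketch only; the 32MB cap and the data directory path are illustrations, not
recommendations:

```shell
# Cap virtual memory for this shell and all its children (value in KB).
# Every backend forked by the postmaster inherits the limit.
ulimit -v 32768

# Confirm the limit took effect before starting PostgreSQL.
ulimit -v

# pg_ctl start -D /usr/local/pgsql/data    # data directory is an assumption
```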
Thanks,
Mark.