Re: 100 simultaneous connections, critical limit?

From: Andrew McMillan
Subject: Re: 100 simultaneous connections, critical limit?
Date:
Msg-id: 1074135427.3198.213.camel@kant.mcmillan.net.nz
In reply to: 100 simultaneous connections, critical limit?  (Jón Ragnarsson <jonr@physicallink.com>)
List: pgsql-performance
On Thu, 2004-01-15 at 01:48, Jón Ragnarsson wrote:
> I am writing a website that will probably have some traffic.
> Right now I wrap every .php page in pg_connect() and pg_close().
> Then I read somewhere that Postgres only supports 100 simultaneous
> connections (default). Is that a limitation? Should I use some other
> method when writing code for high-traffic website?

Whether the overhead of pg_connect() and pg_close() has a noticeable effect
on your application depends on what you do in between them.  TBH I never
call the second one myself - PHP will close the connection when the page
is finished.
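As a minimal sketch of that per-page pattern (the connection string here is a placeholder, not from the original post):

```php
<?php
// One connection per page view. The pg_close() at the end is optional:
// PHP closes non-persistent connections automatically when the script ends.
$db = pg_connect("host=localhost dbname=mydb user=web");
if ($db === false) {
    die("could not connect to database");
}

$res = pg_query($db, "SELECT now()");
$row = pg_fetch_row($res);
echo $row[0];

// pg_close($db);  // optional - PHP does this for you at end of page
?>
```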

I have developed some applications which need to be as fast as possible,
and for those I either use pg_pconnect, so there is one DB connection per
Apache process, or I use DBBalancer, where pg_connect is _actually_
connecting to DBBalancer in a very low-overhead manner and a pool of real
database connections is maintained out the back.  I am the Debian package
maintainer for DBBalancer.
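The pg_pconnect variant is a one-character-ish change on the page (again with a placeholder connection string):

```php
<?php
// pg_pconnect() reuses an existing connection held by this Apache process
// if one matches the connection string; otherwise it opens a new one.
// The connection is NOT closed when the script finishes - it persists in
// the process, waiting for the next request.
$db = pg_pconnect("host=localhost dbname=mydb user=web");
if ($db === false) {
    die("could not connect to database");
}

// Peek at how many backends are currently connected.
$res = pg_query($db, "SELECT count(*) FROM pg_stat_activity");
echo pg_fetch_result($res, 0, 0);
?>
```

Note that with one persistent connection per Apache process, your effective connection count is bounded by Apache's MaxClients, which is why the max_connections discussion below matters.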

You may also want to consider differentiating based on whether the
application is writing to the database or not.  Pooling and persistent
connections can give weird side-effects if transaction scoping is
bollixed in the application - a second page view re-using an earlier
connection which was serving a different page could find itself in the
middle of an unexpected transaction.  Temp tables are one thing that can
bite you here.
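A defensive sketch of that pitfall (the `accounts` table is hypothetical): keep transactions explicit and short, and make sure the script commits or rolls back before it ends, so no open transaction leaks to the next page that reuses the connection.

```php
<?php
$db = pg_pconnect("host=localhost dbname=mydb user=web");

pg_query($db, "BEGIN");
$ok = pg_query($db, "UPDATE accounts SET hits = hits + 1 WHERE id = 1");
if ($ok) {
    pg_query($db, "COMMIT");
} else {
    // Never leave the connection sitting mid-transaction: the next page
    // view to reuse it would silently inherit that transaction state.
    pg_query($db, "ROLLBACK");
}
?>
```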

There are a few database pooling solutions out there.  Using pg_pconnect
is the simplest of these; DBBalancer fixes some of its issues, and
others go further still.

Another point to consider is that database pooling will give you the
biggest performance increase if your queries are all returning small
datasets.  If you return large datasets it can potentially make things
worse (depending on implementation) through double-handling of the data.

As others have said too: 100 is just a configuration setting
(max_connections) in postgresql.conf - not a hard-coded limit.
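For reference, the relevant setting looks like this; raising it requires a server restart, and each allowed connection costs some shared memory:

```
# postgresql.conf
max_connections = 200    # default is 100; restart the server after changing
```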

Cheers,
                    Andrew McMillan.
-------------------------------------------------------------------------
Andrew @ Catalyst .Net .NZ  Ltd,  PO Box 11-053,  Manners St,  Wellington
WEB: http://catalyst.net.nz/             PHYS: Level 2, 150-154 Willis St
DDI: +64(4)916-7201       MOB: +64(21)635-694      OFFICE: +64(4)499-2267
              How many things I can do without! - Socrates
-------------------------------------------------------------------------
