Yes, this helps a lot. Obviously this is far more complicated than a simple pg_pconnect call to get a speedy reply. I really thought a persistent connection was supposed to eliminate the connection overhead (well, other than the first time through). But even if I am the only person on the machine, it still takes forever to get a response every time I use HTTP. I wondered if I was supposed to have a PHP program with its own pconnect, so that every HTTP call to the PostgreSQL database went through that program rather than each request handling its own connection, but I found no indication of that while RTFM. I will give this a try and see if I can get the speed to anything reasonable. Thanks for the quick reply!
Jeff
----- Original Message -----
Sent: Tuesday, March 11, 2003 9:30 AM
Subject: Re: [PERFORM] Large difference between elapsed time and run time
Hi,
-----Original Message-----
From: Jeffrey D. Brower [mailto:jeff@pointhere.net]
Sent: 11 March 2003 14:24
To: Nikk Anderson; 'Scott Buchan'; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Large difference between elapsed time and run time
>My question is how are you accomplishing the connection pooling?
I have programmed a connection pool in Java - I am sure something similar is possible in most other languages.
Very basically, the concept is as follows:
Application initialisation
  1) Create X number of connections to the database
  2) Store the connections in an object
  3) In that object, create arrays of free and busy connections - put all new connections in the free array
  4) Make the object visible to all components of the web application

Request for a connection
  5) Code asks the pool object for a connection
  6) The pool object moves a connection from the free array to the busy array
  7) The connection is used to do queries
  8) The connection is handed back to the pool object
  9) The pool object moves the connection from the busy array back to the free array
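The steps above could be sketched roughly like this in Java. This is just an illustration of the free/busy bookkeeping, not Nikk's actual code: the `PooledConnection` class is a stand-in for a real `java.sql.Connection` (which a real pool would open with `DriverManager.getConnection(...)`), and details like blocking when the pool is exhausted are left out.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class SimplePool {
    // Placeholder for a real database connection (e.g. java.sql.Connection).
    static class PooledConnection { }

    private final Deque<PooledConnection> free = new ArrayDeque<>(); // free "array"
    private final Set<PooledConnection> busy = new HashSet<>();      // busy "array"

    // Steps 1-3: create X connections up front and put them all in the free array.
    public SimplePool(int size) {
        for (int i = 0; i < size; i++) {
            free.push(new PooledConnection());
        }
    }

    // Steps 5-6: hand out a free connection, moving it to the busy array.
    public synchronized PooledConnection acquire() {
        if (free.isEmpty()) {
            // A real pool might block here, or grow the pool, instead of failing.
            throw new IllegalStateException("pool exhausted");
        }
        PooledConnection c = free.pop();
        busy.add(c);
        return c;
    }

    // Steps 8-9: the caller returns the connection; move it back to the free array.
    public synchronized void release(PooledConnection c) {
        if (busy.remove(c)) {
            free.push(c);
        }
    }

    public synchronized int freeCount() { return free.size(); }
    public synchronized int busyCount() { return busy.size(); }
}
```

The object would be created once at application startup (step 4, e.g. held in the servlet context) so every request handler reuses the same already-open connections instead of paying the connect cost each time.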
I hope that helps!
Nikk