On Tue, 2010-09-14 at 10:10 -0600, mark wrote:
> Hello,
>
> I am relatively new to postgres (just a few months), so apologies in
> advance to any of you bearing with me.
>
> I am trying to get a rough idea of the amount of bang for the buck I
> might see if I put a connection pooling service into the environment
> vs our current methodology of using persistent open connections.
Well, what a pooler does is provide persistent open connections that can
be reused. What are you using for these persistent open connections now?
> Most of the connections from the various apps hold idle connections
> until they need to execute a query; once done, they go back to holding
> an open idle connection. (there are ~600 open connections at any given
> time, and most of the time most are idle)
Sounds like each app is holding its own pool?
> I think from reading this list for a few weeks the answer is move to
> using connection pooling package elsewhere to better manage incoming
> connections, with a lower number to the db.
Correct, because each connection carries overhead (a backend process,
memory, etc.). If you have 600 open connections of which only 20 are
actually executing at any moment, that is highly inefficient.
A pooler would keep, say, 40 server connections open, with 20 currently
executing, while still accepting up to 600 client connections.
>
> I am told this will require some re-working of some app code. As I
> understand it, pg-pool was tried a while back in our QA environment,
> and several parts of various in-house apps/scripts/..etc started to
> experience show-stopping problems.
Use pgbouncer. It is what Skype uses.
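For the numbers discussed above, a minimal pgbouncer configuration might
look like the sketch below. The database name, paths, and sizes are
illustrative, not a recommendation for your workload:

```ini
; Hypothetical pgbouncer.ini sketch -- tune sizes for your own workload
[databases]
; clients connect to pgbouncer on 6432 instead of Postgres directly
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction      ; release server connection after each txn
max_client_conn = 600        ; apps may still open up to 600 clients
default_pool_size = 40       ; but only ~40 real server connections
```

Note that transaction pooling breaks session-level features (prepared
statements, advisory locks, SET), which may be what bit your in-house
apps during the pg-pool trial; session pooling is the safer first step.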
Sincerely,
Joshua D. Drake
--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt