On Mon, Nov 14, 2011 at 4:59 PM, Ben Chobot <bench@silentmedia.com> wrote:
> On Nov 14, 2011, at 4:42 PM, Cody Caughlan wrote:
>
>> We have anywhere from 60-80 background worker processes connecting to
>> Postgres, performing a short task and then disconnecting. The lifetime
>> of these tasks averages 1-3 seconds.
>
> [snip]
>
>> Is this something that I should look into or is it not much of an
>> issue? Whats the best way to determine if I could benefit from using a
>> connection pool?
>
> Yes, this is precisely the kind of situation a connection pooler will help with. Not only with the connection setup/teardown overhead, but also by using the resources on your server better.... you probably don't actually have 60-80 cores on your server, so reducing that number down to just a few that are actually working will let Postgres finish them faster and move on to others. Basically, the queueing happens off the postgres server, letting postgres use the box with fewer interruptions.
>
> Now, is it a problem to not use a pooler? That depends on if it's causing you grief or not. But if you think you'll get more connection churn or larger numbers of workers, then a connection pooler will only help more.
Thanks for your input. It's not causing me grief per se. The load on
the pg machine is small. I guess I am just wondering if I am being
stupid and leaving resources and/or performance on the table.
But it sounds like, as a whole, it would be a good use case for
pgbouncer and in the long run will prove beneficial. But no, it's not
obviously killing me right now.
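
For anyone following along, a minimal pgbouncer setup for this kind of short-task worker workload might look like the sketch below. The database name, paths, and pool sizes here are illustrative placeholders, not values from this thread:

```ini
; pgbouncer.ini -- a minimal sketch; names and sizes are illustrative
[databases]
; workers connect to pgbouncer on port 6432 instead of Postgres directly
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt

; transaction pooling suits short-lived tasks: a server connection is
; held only for the duration of each transaction, then reused
pool_mode = transaction

; accept all 60-80 workers as clients, but keep only a handful of
; actual Postgres backends busy at a time
max_client_conn = 100
default_pool_size = 10
```

With something like this, the workers only need to change their connection port from 5432 to 6432. One caveat worth knowing: transaction pooling is not transparent to session-level features (session-scoped prepared statements, advisory locks, SET, etc.), so whether it fits depends on what the workers actually do.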