On Aug 19, 2013, at 9:55 AM, Dzmitry <dzmitry.nikitsin@gmail.com> wrote:
> No, I am not using pgbouncer, I am using pgpool.
>
> In total I have 440 connections to Postgres (I have a Rails application
> running on several servers; each application sets up 60 connections to the
> DB and keeps them forever, until they are killed. I also have some machines
> that do background processing, which keep connections too).
>
> The part that does a lot of writes (updating jobs from an XML feed every
> night) has 40 threads and keeps 40 connections.
That's extreme, and probably counter-productive.
How many cores do you have on those rails servers? Probably not 64, right? Not 32? 16? 12? 8, even? Assuming <64, what
advantage do you expect from 60 connections? The same comment applies to the 40 connections doing the update jobs--more
connections than cores is unlikely to be helping anything, and more connections than 2x cores is almost guaranteed to be
worse than fewer.
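If those 60 connections per server come from the Rails connection pool, the cap lives in config/database.yml. A minimal sketch, assuming a standard Rails setup (the host, database name, and pool value are illustrative placeholders, not recommendations for your workload):

```yaml
# config/database.yml -- hypothetical fragment
# "pool" caps connections per Rails process; start near the core count
# rather than 60, then measure.
production:
  adapter: postgresql
  host: db.example.com          # placeholder host
  database: myapp_production    # placeholder database name
  pool: 8                       # e.g. ~1x cores on an 8-core app server
```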
Postgres connections are of the heavy-weight variety: process per connection, not thread per connection, not thread-per-core
event-driven. In particular, I'd worry about work_mem in your configuration. You've either got to set it really low
and live with queries going to disk too quickly for sorts and so on, or have it a decent size and run the risk that too
many queries at once will trigger OOM.
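To put a rough number on that OOM risk: each backend can allocate up to work_mem per sort or hash operation, so connections x work_mem is a floor on the worst-case exposure. A back-of-the-envelope sketch (the 16 MB figure is an assumption for illustration, not from the original post):

```python
# Worst-case memory floor for sort/hash work areas across all backends.
# This actually undercounts: a single complex query can hold several
# work_mem allocations at once (one per sort/hash node in the plan).
def worst_case_mb(connections: int, work_mem_mb: int) -> int:
    return connections * work_mem_mb

# 440 connections at a modest 16 MB work_mem:
print(worst_case_mb(440, 16))  # 7040 MB, ~7 GB on top of shared_buffers
```

Halving the connection count halves that exposure without touching work_mem at all.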
Given your configuration, I wouldn't even start with pgbouncer for connection pooling. I'd first just slash the number
of connections everywhere by 1/2, or even 1/4, and see what effect that had. Then, as a second step, I'd look at where
connection pooling might be used effectively.
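When that second step does come, a transaction-mode pgbouncer in front of Postgres is the usual shape: many client connections multiplexed onto a small pool of server connections. A minimal sketch of pgbouncer.ini, with placeholder hosts and sizes:

```ini
; pgbouncer.ini -- hypothetical fragment
[databases]
myapp = host=127.0.0.1 port=5432 dbname=myapp_production

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction     ; server connection released at transaction end
default_pool_size = 16      ; server connections per database/user pair
```

Clients then connect to port 6432 instead of 5432; transaction pooling keeps the number of real Postgres backends close to the core count regardless of how many app threads connect.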
--
Scott Ribe
scott_ribe@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice