Re: Millisecond-precision connect_timeout for libpq
| From | Josh Berkus |
|---|---|
| Subject | Re: Millisecond-precision connect_timeout for libpq |
| Date | |
| Msg-id | 51D72632.2090604@agliodbs.com |
| In response to | Millisecond-precision connect_timeout for libpq (ivan babrou <ibobrik@gmail.com>) |
| Responses | Re: Millisecond-precision connect_timeout for libpq |
| | Re: Millisecond-precision connect_timeout for libpq |
| List | pgsql-hackers |
On 07/05/2013 12:26 PM, Tom Lane wrote:
> ivan babrou <ibobrik@gmail.com> writes:
>> If you can figure out that postgresql is overloaded then you may
>> decide what to do faster. In our app we have very strict limit for
>> connect time to mysql, redis and other services, but postgresql has
>> minimum of 2 seconds. When processing time for request is under 100ms
>> on average sub-second timeouts matter.
>
> If you are issuing a fresh connection for each sub-100ms query, you're
> doing it wrong anyway ...

It's fairly common with certain kinds of apps, including Rails and PHP. This is one of the reasons why we've discussed having a kind of stripped-down version of pgbouncer built into Postgres as a connection manager. Even if it weren't valuable to be able to relocate pgbouncer to other hosts, I'd still say that was a good idea.

Ivan would benefit strongly from running pgbouncer on his app servers instead of connecting directly to Postgres.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
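For readers following the thread, here is a minimal sketch (not the proposed patch) of how a connection timeout is supplied through libpq today: `connect_timeout` is passed in the connection string and is parsed as a whole number of seconds, which is the granularity limitation being discussed. The host and database names below are placeholders.

```c
/* Minimal sketch: opening a libpq connection with the existing
 * connect_timeout parameter.  Not the patch under discussion; host
 * and dbname are placeholder values. */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int
main(void)
{
    /* connect_timeout is interpreted as an integer number of seconds,
     * so sub-second timeouts cannot be expressed here today. */
    const char *conninfo =
        "host=db.example.com dbname=appdb connect_timeout=2";

    PGconn *conn = PQconnectdb(conninfo);

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return EXIT_FAILURE;
    }

    printf("connected\n");
    PQfinish(conn);
    return EXIT_SUCCESS;
}
```

With a local pgbouncer in front of the application, the connect step this timeout guards becomes a fast local handshake, which is the arrangement recommended above.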