Discussion: Many connections lingering

Many connections lingering

From: Slavisa Garic
Date:
Hi all,

I've just noticed an interesting behaviour with PGSQL. My software is
made up of a few different modules that interact through a PGSQL
database. Almost every query they issue is an individual transaction,
and there is a good reason for that: after every query, those modules do
some processing, and I didn't want to lock the database in a single
transaction while that processing is happening. Now, the interesting
behaviour is this. I ran netstat on the machine where my software is
running and searched for TCP connections to my PGSQL server. What I
found was hundreds of lines like this:

tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39504 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:40720 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39135 TIME_WAIT
tcp        0      0 remus.dstc.monash:43002 remus.dstc.monash:41631 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:41119 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:41311 TIME_WAIT
tcp        0      0 remus.dstc.monash.:8649 remus.dstc.monash:41369 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:40479 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39454 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39133 TIME_WAIT
tcp        0      0 remus.dstc.monash:43002 remus.dstc.monash:41501 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39132 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:41308 TIME_WAIT
tcp        0      0 remus.dstc.monash:43002 remus.dstc.monash:40667 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:41179 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39323 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:41434 TIME_WAIT
tcp        0      0 remus.dstc.monash:43002 remus.dstc.monash:40282 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:41050 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:41177 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39001 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:41305 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:38937 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39128 TIME_WAIT
tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:40600 TIME_WAIT
tcp        0      0 remus.dstc.monash:43002 remus.dstc.monash:41624 TIME_WAIT
tcp        0      0 remus.dstc.monash:43002 remus.dstc.monash:39000 TIME_WAIT

Now could someone explain to me what this really means and what effect
it might have on the machine (the same machine where I ran this
query)? Would there eventually be a shortage of available ports if
this kept growing? The reason I am asking is that one of my
modules was raising an exception saying that a TCP connection could
not be established to a server it needed to connect to. This may sound
confusing, so I'll try to explain.

We have this scenario: there is a PGSQL server (postmaster) running on
machine A. Then there is a custom server called DBServer, which runs on
machine B and accepts connections from a client called an Agent. An
Agent may run on any machine out there and connect back to DBServer
asking for some information. The communication between the two is in
the form of SQL queries. When an Agent sends a query to DBServer,
DBServer passes that query to the postmaster on machine A and then
passes the result of the query back to that Agent. The connection
problem I mentioned in the paragraph above happens when an Agent tries
to connect to DBServer.

So the only question I have here is: would those lingering socket
connections above have any effect on the problem I am having? If not, I
am sorry for bothering you all with this; if yes, I would like to know
what I could do to avoid it.

Any help would be appreciated,
Regards,
Slavisa

Re: [NOVICE] Many connections lingering

From: Tom Lane
Date:
Slavisa Garic <sgaric@gmail.com> writes:
> ... Now, the
> interesting behaviour is this. I ran netstat on the machine where
> my software is running and searched for TCP connections to my PGSQL
> server. What I found was hundreds of lines like this:

> tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39504 TIME_WAIT
> tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:40720 TIME_WAIT
> tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39135 TIME_WAIT

This is a network-level issue: the TCP stack on your machine knows the
connection has been closed, but it hasn't seen an acknowledgement of
that fact from the other machine, and so it's remembering the connection
number so that it can definitively say "that connection is closed" if
the other machine asks.  I'd guess that either you have a flaky network
or there's something bogus about the TCP stack on the client machine.
An occasional dropped FIN packet is no surprise, but hundreds of 'em
are suspicious.

> Now could someone explain to me what this really means and what effect
> it might have on the machine (the same machine where I ran this
> query)? Would there eventually be a shortage of available ports if
> this kept growing? The reason I am asking is that one of my
> modules was raising an exception saying that a TCP connection could
> not be established to a server it needed to connect to.

That kinda sounds like "flaky network" to me, but I could be wrong.
In any case, you'd have better luck asking kernel or network hackers
about this than database weenies ;-)

            regards, tom lane

Re: [NOVICE] Many connections lingering

From: Greg Stark
Date: 12 Apr 2005 23:27:09 -0400
Tom Lane <tgl@sss.pgh.pa.us> writes:

> Slavisa Garic <sgaric@gmail.com> writes:
> > ... Now, the
> > interesting behaviour is this. I ran netstat on the machine where
> > my software is running and searched for TCP connections to my PGSQL
> > server. What I found was hundreds of lines like this:
>
> > tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39504 TIME_WAIT
> > tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:40720 TIME_WAIT
> > tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39135 TIME_WAIT
>
> This is a network-level issue: the TCP stack on your machine knows the
> connection has been closed, but it hasn't seen an acknowledgement of
> that fact from the other machine, and so it's remembering the connection
> number so that it can definitively say "that connection is closed" if
> the other machine asks.  I'd guess that either you have a flaky network
> or there's something bogus about the TCP stack on the client machine.
> An occasional dropped FIN packet is no surprise, but hundreds of 'em
> are suspicious.

No, what Tom's describing is a different pair of states called FIN_WAIT_1 and
FIN_WAIT_2. TIME_WAIT isn't waiting for a packet, just a timeout. This is to
prevent any delayed packets from earlier in the connection causing problems
with a subsequent good connection. Otherwise you could get data from the old
connection mixed in with the data for later ones.

> > Now could someone explain to me what this really means and what effect
> > it might have on the machine (the same machine where I ran this
> > query)? Would there eventually be a shortage of available ports if
> > this kept growing? The reason I am asking is that one of my
> > modules was raising an exception saying that a TCP connection could
> > not be established to a server it needed to connect to.

What it does indicate is that each query you're making is probably not just a
separate transaction but a separate TCP connection. That's probably not
necessary. If you have a single long-lived process you could just keep the TCP
connection open and issue a COMMIT after each transaction. That's what I would
recommend doing.
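
In outline, that pattern might look like the following Python sketch
using psycopg2 (the DSN and the helper name are illustrative
assumptions, not anything specified in this thread):

    import psycopg2

    # One long-lived connection for the life of the process,
    # instead of one TCP connection per query.
    conn = psycopg2.connect("host=machineA dbname=experiments user=agent")

    def run_query(sql, params=None):
        # Each call is still its own short transaction: the COMMIT ends
        # it, so no locks are held while the module does its post-query
        # processing.
        cur = conn.cursor()
        cur.execute(sql, params)
        rows = cur.fetchall()
        conn.commit()
        cur.close()
        return rows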


Unless you have thousands of these TIME_WAIT connections, they probably
aren't directly the cause of your failure to establish connections. But
yes, it can happen.

What's more likely happening here is that you're stressing the server by
issuing so many connection attempts that you're triggering some bug,
either in the TCP stack or in Postgres, that causes some connection
attempts not to be handled properly.

I'm skeptical that there's a bug in Postgres, since lots of people do in
fact run web servers configured to open a new connection for every page.
But this wouldn't happen to be a Windows server, would it? Perhaps the
networking code in that port doesn't do the right thing in this case?

--
greg

Re: [NOVICE] Many connections lingering

From: Tom Lane
Date:
Greg Stark <gsstark@mit.edu> writes:
> Tom Lane <tgl@sss.pgh.pa.us> writes:
>> This is a network-level issue: the TCP stack on your machine knows the
>> connection has been closed, but it hasn't seen an acknowledgement of
>> that fact from the other machine, and so it's remembering the connection
>> number so that it can definitively say "that connection is closed" if
>> the other machine asks.

> No, what Tom's describing is a different pair of states called FIN_WAIT_1 and
> FIN_WAIT_2. TIME_WAIT isn't waiting for a packet, just a timeout.

D'oh, obviously it's been too many years since I read Stevens ;-)

So AFAICS this status report doesn't actually indicate any problem,
other than massively profligate use of separate connections.  Greg's
correct that there's some risk of resource exhaustion at the TCP level,
but it's not very likely.  I'd be more concerned about the amount of
resources wasted in starting a separate Postgres backend for each
connection.  PG backends are fairly heavyweight objects --- if you
are at all concerned about performance, you want to get a decent number
of queries done in each connection.  Consider using a connection pooler.
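
One client-side way to get that effect is a small connection pool,
sketched here with psycopg2's pool module (pool sizes and DSN are
made-up values for illustration):

    import psycopg2.pool

    # A handful of reusable backends instead of one backend per query.
    pool = psycopg2.pool.ThreadedConnectionPool(
        1, 5, "host=machineA dbname=experiments user=agent")

    def run_query(sql, params=None):
        conn = pool.getconn()
        try:
            cur = conn.cursor()
            cur.execute(sql, params)
            rows = cur.fetchall()
            conn.commit()
            return rows
        finally:
            pool.putconn(conn)  # return the backend for reuse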

            regards, tom lane

Re: [NOVICE] Many connections lingering

From: Slavisa Garic
Date:
Hi Greg,

This is not a Windows server. Both server and client are on the same
machine (done for testing purposes), and it is a Fedora Core 2 machine.
This also happens with a Debian server and client, in which case they
were two separate machines.

There are thousands (2000+) of these waiting around, and each one of
them disappears after 50-ish seconds. I tried the psql command line and
monitored that connection in netstat. After I did a graceful exit
(\quit) the connection changed to TIME_WAIT and it sat there for around
50 seconds. I thought I could do what you suggested, having one
connection and making each query a full BEGIN/QUERY/COMMIT transaction,
but I had hoped to avoid that :).

This is a serious problem for me, as there are multiple users using our
software on our server, and I want to avoid having connections
open for a long time. In the scenario mentioned below I haven't
explained the magnitude of the communication happening between Agents
and DBServer. There could possibly be 100 or more Agents per
experiment, per user, running on remote machines at the same time;
hence we need short transactions/pgsql connections. Agents need a
reliable connection, because failure to connect could mean a loss of
computation results that were gathered over long periods of time.

Thanks for the help by the way :),
Regards,
Slavisa

On 12 Apr 2005 23:27:09 -0400, Greg Stark <gsstark@mit.edu> wrote:
>
> Tom Lane <tgl@sss.pgh.pa.us> writes:
>
> > Slavisa Garic <sgaric@gmail.com> writes:
> > > ... Now, the
> > > interesting behaviour is this. I ran netstat on the machine where
> > > my software is running and searched for TCP connections to my PGSQL
> > > server. What I found was hundreds of lines like this:
> >
> > > tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39504 TIME_WAIT
> > > tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:40720 TIME_WAIT
> > > tcp        0      0 remus.dstc.monash:43001 remus.dstc.monash:39135 TIME_WAIT
> >
> > This is a network-level issue: the TCP stack on your machine knows the
> > connection has been closed, but it hasn't seen an acknowledgement of
> > that fact from the other machine, and so it's remembering the connection
> > number so that it can definitively say "that connection is closed" if
> > the other machine asks.  I'd guess that either you have a flaky network
> > or there's something bogus about the TCP stack on the client machine.
> > An occasional dropped FIN packet is no surprise, but hundreds of 'em
> > are suspicious.
>
> No, what Tom's describing is a different pair of states called FIN_WAIT_1 and
> FIN_WAIT_2. TIME_WAIT isn't waiting for a packet, just a timeout. This is to
> prevent any delayed packets from earlier in the connection causing problems
> with a subsequent good connection. Otherwise you could get data from the old
> connection mixed in with the data for later ones.
>
> > > Now could someone explain to me what this really means and what effect
> > > it might have on the machine (the same machine where I ran this
> > > query)? Would there eventually be a shortage of available ports if
> > > this kept growing? The reason I am asking is that one of my
> > > modules was raising an exception saying that a TCP connection could
> > > not be established to a server it needed to connect to.
>
> What it does indicate is that each query you're making is probably not just a
> separate transaction but a separate TCP connection. That's probably not
> necessary. If you have a single long-lived process you could just keep the TCP
> connection open and issue a COMMIT after each transaction. That's what I would
> recommend doing.
>
> Unless you have thousands of these TIME_WAIT connections, they probably
> aren't directly the cause of your failure to establish connections. But
> yes, it can happen.
>
> What's more likely happening here is that you're stressing the server by
> issuing so many connection attempts that you're triggering some bug,
> either in the TCP stack or in Postgres, that causes some connection
> attempts not to be handled properly.
>
> I'm skeptical that there's a bug in Postgres, since lots of people do in
> fact run web servers configured to open a new connection for every page.
> But this wouldn't happen to be a Windows server, would it? Perhaps the
> networking code in that port doesn't do the right thing in this case?
>
> --
> greg
>
>

Re: [NOVICE] Many connections lingering

From: Mark Lewis
Date: 14 Apr 2005
If there are potentially hundreds of clients at a time, then you may be
running into the maximum connection limit.

In postgresql.conf, there is a max_connections setting which IIRC
defaults to 100.  If you try to open more concurrent connections to the
backend than that, you will get a connection refused.

If your DB is fairly gnarly and your performance needs are minimal it
should be safe to increase max_connections.  An alternative approach
would be to add some kind of database broker program.  Instead of each
agent connecting directly to the database, they could pass their data to
a broker, which could then implement connection pooling.
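
For reference, checking and raising the limit looks roughly like this
(200 is only an example value; the postmaster must be restarted for the
change to take effect):

    -- from psql: show the current limit
    SHOW max_connections;

    # in postgresql.conf: raise the limit, then restart the postmaster
    max_connections = 200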

-- Mark Lewis

On Tue, 2005-04-12 at 22:09, Slavisa Garic wrote:
> This is a serious problem for me, as there are multiple users using our
> software on our server, and I want to avoid having connections
> open for a long time. In the scenario mentioned below I haven't
> explained the magnitude of the communication happening between Agents
> and DBServer. There could possibly be 100 or more Agents per
> experiment, per user, running on remote machines at the same time;
> hence we need short transactions/pgsql connections. Agents need a
> reliable connection, because failure to connect could mean a loss of
> computation results that were gathered over long periods of time.



Re: [NOVICE] Many connections lingering

From: John DeSoi
Date:
On Apr 13, 2005, at 1:09 AM, Slavisa Garic wrote:

> This is not a Windows server. Both server and client are on the same
> machine (done for testing purposes), and it is a Fedora Core 2 machine.
> This also happens with a Debian server and client, in which case they
> were two separate machines.
>
> There are thousands (2000+) of these waiting around, and each one of
> them disappears after 50-ish seconds. I tried the psql command line and
> monitored that connection in netstat. After I did a graceful exit
> (\quit) the connection changed to TIME_WAIT and it sat there for around
> 50 seconds. I thought I could do what you suggested, having one
> connection and making each query a full BEGIN/QUERY/COMMIT transaction,
> but I had hoped to avoid that :).


If you do a bit of searching on TIME_WAIT you'll find this is a common
TCP/IP-related problem, but the behavior is within the specs of the
protocol.  I don't know how to do it on Linux, but you should be able
to change TIME_WAIT to a shorter value. For the archives, here is a
pointer on changing TIME_WAIT on Windows:

http://www.winguides.com/registry/display.php/878/
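
On Linux the TIME_WAIT interval itself is compiled into the kernel
(about 60 seconds), so the knobs below are the usual workarounds rather
than a direct equivalent. These /etc/sysctl.conf lines are a sketch, and
the values are examples, not recommendations:

    # Let new outgoing connections reuse sockets still in TIME_WAIT.
    net.ipv4.tcp_tw_reuse = 1
    # Widen the ephemeral port range so TIME_WAIT entries exhaust it later.
    net.ipv4.ip_local_port_range = 1024 65535

Apply with "sysctl -p".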


John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL


Re: [NOVICE] Many connections lingering

From: Richard Huxton
Date: 14 Apr 2005
Slavisa Garic wrote:
> This is a serious problem for me, as there are multiple users using our
> software on our server, and I want to avoid having connections
> open for a long time. In the scenario mentioned below I haven't
> explained the magnitude of the communication happening between Agents
> and DBServer. There could possibly be 100 or more Agents per
> experiment, per user, running on remote machines at the same time;
> hence we need short transactions/pgsql connections. Agents need a
> reliable connection, because failure to connect could mean a loss of
> computation results that were gathered over long periods of time.

Plenty of others have discussed the technical reasons why you are seeing
these connection issues. If you find it difficult to change your way of
working, you might find the pgpool connection-pooling project useful:
   http://pgpool.projects.postgresql.org/

HTH
--
   Richard Huxton
   Archonet Ltd

Strange serialization problem

From: Mischa Sandberg
Date:
I have a performance problem; I'd like any suggestions on where to continue
investigation.

A set of insert-only processes seems to serialize itself. :-(

The processes appear to be blocked on disk I/O, probably on the drive
holding the table rather than the pg_xlog drive.

Each process is inserting a block of 10K rows into a table.
I'm guessing they are "serialized" because one process by itself takes
15-20 secs, while running ten processes in parallel averages 100-150
secs (each), with an elapsed (wall) time of 150-200 secs.

Polling pg_locks shows each process has (been granted) only the locks you would
expect. I RARELY see an Exclusive lock on an index, and then only on one index
at a time.

A sample from pg_locks:

TABLE/INDEX                  GRANTED PID  MODE
m_reason                           t 7340 AccessShare
message                            t 7340 AccessShare
message                            t 7340 RowExclusive
pk_message                         t 7340 AccessShare
tmp_message                        t 7340 AccessShare
("m_reason" is a one-row lookup table; see INSERT cmd below).

--------------------------
The query plan is quite reasonable (see below).

On a side note, this is the first app I've had to deal with that is sweet to
pg_xlog, but hammers the drive bearing the base table (3x the traffic).

"log_executor_stats" for a sample insert look reasonable (except the "elapsed"!)

! system usage stats:
! 308.591728 elapsed 3.480000 user 1.270000 system sec
! [4.000000 user 1.390000 sys total]
! 0/0 [0/0] filesystem blocks in/out
! 18212/15 [19002/418] page faults/reclaims, 0 [0] swaps
! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
! 0/0 [0/0] voluntary/involuntary context switches
! buffer usage stats:
! Shared blocks:       9675 read,       8781 written, buffer hit rate = 97.66%
! Local  blocks:        504 read,         64 written, buffer hit rate = 0.00%
! Direct blocks:          0 read,          0 written

Summarized "ps" output for the above backend process, sampled every 5 secs,
shows it is 94% in the 'D' state, 3% in the 'S' state.

================
== BACKGROUND ==
================

**SOFTWARE
- PG 7.4.6, RedHat 8.

----------------------------------
**HARDWARE
Xeon 2x2 2.4GHz 2GB RAM
4 x 73GB SCSI; pg_xlog and base on separate drives.

----------------------------------
**APPLICATION

Six machines post batches of 10K messages to the PG db server.
Machine #nn generates its ID keys as "nn00000000001"::bigint etc.

Each process runs:
- "COPY tmp_message FROM STDIN" to load its own one-use TEMP table.
- "INSERT INTO message
     SELECT tmp.* FROM tmp_message AS tmp
     JOIN m_reason ON m_reason.name = tmp.reason
     LEFT JOIN message USING (ID) WHERE message.ID IS NULL"
  (the check is required because crash-recovery logic requires an
  idempotent insert)
- "DROP TABLE tmp_message"  --- call me paranoid, this is 7.4

The COPY step time is almost constant when #processes varies from 1 to 10.
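
For what it's worth, the per-batch sequence above maps directly onto,
e.g., psycopg2, where copy_from drives COPY ... FROM STDIN (the DSN and
file name here are illustrative):

    import psycopg2

    conn = psycopg2.connect("dbname=msgdb")  # illustrative DSN
    cur = conn.cursor()
    # One-use TEMP table with the same shape as "message".
    cur.execute("CREATE TEMP TABLE tmp_message AS SELECT * FROM message LIMIT 0")
    with open("batch_10k.dat") as f:
        cur.copy_from(f, "tmp_message")      # COPY tmp_message FROM STDIN
    cur.execute("""
        INSERT INTO message
        SELECT tmp.* FROM tmp_message AS tmp
        JOIN m_reason ON m_reason.name = tmp.reason
        LEFT JOIN message USING (ID) WHERE message.ID IS NULL""")
    cur.execute("DROP TABLE tmp_message")
    conn.commit()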

----------------------------------
**POSTGRES
pg_autovacuum is running with default parameters.

Non-default GUC values:
checkpoint_segments            = 512
default_statistics_target      = 200
effective_cache_size           = 500000
log_min_duration_statement     = 1000
max_fsm_pages                  = 1000000
max_fsm_relations              = 1000
random_page_cost               = 1
shared_buffers                 = 10000
sort_mem                       = 16384
stats_block_level              = true
stats_command_string           = true
stats_row_level                = true
vacuum_mem                     = 65536
wal_buffers                    = 2000

wal_buffers and checkpoint_segments look outrageous,
but were tuned for another process that posts batches of 10,000 6KB rows
in a single insert.
----------------------------------
TABLE/INDEX STATISTICS

----------------------------------
MACHINE STATISTICS

ps gives the backend process as >98% in (D) state, with <1% CPU.

A "top" snapshot:
CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
           total    2.0%    0.0%    0.8%   0.0%     0.0%   96.9%    0.0%
           cpu00    2.5%    0.0%    1.9%   0.0%     0.0%   95.4%    0.0%
           cpu01    1.7%    0.0%    0.1%   0.0%     0.3%   97.6%    0.0%
           cpu02    0.5%    0.0%    0.7%   0.0%     0.0%   98.6%    0.0%
           cpu03    3.1%    0.0%    0.5%   0.0%     0.0%   96.2%    0.0%
Mem:  2061552k av, 2041752k used,   19800k free,       0k shrd,   21020k buff

iostat reports that the $PGDATA/base drive is being worked but not overworked.
The pg_xlog drive is underworked:

       KBPS   TPS   KBPS   TPS   KBPS   TPS   KBPS   TPS
12:30      1     2    763    16     31     8   3336   269
12:40      5     3   1151    22      5     5   2705   320
                      ^pg_xlog^                  ^base^

The base drive has run as much as 10MBPS, 5K TPS.
----------------------------------
EXPLAIN ANALYZE output:
The plan is eminently reasonable. But there's no visible relationship
between the top level "actual time" and the "total runtime":

Nested Loop Left Join
                (cost=0.00..31109.64 rows=9980 width=351)
                (actual time=0.289..2357.346 rows=9980 loops=1)
  Filter: ("inner".id IS NULL)
  ->  Nested Loop
                (cost=0.00..735.56 rows=9980 width=351)
                (actual time=0.092..1917.677 rows=9980 loops=1)
        Join Filter: (("outer".name)::text = ("inner".reason)::text)
        ->  Seq Scan on m_reason r
                (cost=0.00..1.01 rows=1 width=12)
                (actual time=0.008..0.050 rows=1 loops=1)
        ->  Seq Scan on tmp_message t
                (cost=0.00..609.80 rows=9980 width=355)
                (actual time=0.067..1756.617 rows=9980 loops=1)
  ->  Index Scan using pk_message on message
                (cost=0.00..3.02 rows=1 width=8)
                (actual time=0.014..0.014 rows=0 loops=9980)
        Index Cond: ("outer".id = message.id)
Total runtime: 737401.687 ms

--
"Dreams come true, not free." -- S.Sondheim, ITW


Re: [NOVICE] Many connections lingering

From: Slavisa Garic
Date:
Hi,

This looks very interesting. I'll give it a closer look and see whether
the performance penalties pgpool brings are substantial; if they are
not, this program could be very helpful.

Thanks for the hint,
Slavisa

On 4/14/05, Richard Huxton <dev@archonet.com> wrote:
> Slavisa Garic wrote:
> > This is a serious problem for me, as there are multiple users using our
> > software on our server, and I want to avoid having connections
> > open for a long time. In the scenario mentioned below I haven't
> > explained the magnitude of the communication happening between Agents
> > and DBServer. There could possibly be 100 or more Agents per
> > experiment, per user, running on remote machines at the same time;
> > hence we need short transactions/pgsql connections. Agents need a
> > reliable connection, because failure to connect could mean a loss of
> > computation results that were gathered over long periods of time.
>
> Plenty of others have discussed the technical reasons why you are seeing
> these connection issues. If you find it difficult to change your way of
> working, you might find the pgpool connection-pooling project useful:
>    http://pgpool.projects.postgresql.org/
>
> HTH
> --
>    Richard Huxton
>    Archonet Ltd
>

Re: [NOVICE] Many connections lingering

From: Slavisa Garic
Date:
Hi Mark,

My DBServer module already serves as a broker. At the moment it opens
a new connection for every incoming Agent connection. I did it this
way because I wanted to leave synchronisation to PGSQL. I might have
to modify it a bit and use a shared, single connection for all Agents,
as sketched below. I guess that is not a bad option; I just have to
ensure that the code is not below par :).
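
A minimal sketch of that shared-connection variant in Python (psycopg2,
the DSN, and the helper name are assumptions; the lock serializes
Agents so the single backend only ever sees one transaction at a time):

    import threading
    import psycopg2

    conn = psycopg2.connect("host=machineA dbname=experiments user=dbserver")
    conn_lock = threading.Lock()

    def query_for_agent(sql, params=None):
        # All Agents share one backend; the lock keeps their
        # transactions from interleaving on the shared session.
        with conn_lock:
            cur = conn.cursor()
            cur.execute(sql, params)
            rows = cur.fetchall()
            conn.commit()
            cur.close()
            return rows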

Also, thanks for the postgresql.conf hint; that limit was pretty low on
our server, so this might help a bit,

Regards,
Slavisa

On 4/14/05, Mark Lewis <mark.lewis@mir3.com> wrote:
> If there are potentially hundreds of clients at a time, then you may be
> running into the maximum connection limit.
>
> In postgresql.conf, there is a max_connections setting which IIRC
> defaults to 100.  If you try to open more concurrent connections to the
> backend than that, you will get a connection refused.
>
> If your DB is fairly gnarly and your performance needs are minimal it
> should be safe to increase max_connections.  An alternative approach
> would be to add some kind of database broker program.  Instead of each
> agent connecting directly to the database, they could pass their data to
> a broker, which could then implement connection pooling.
>
> -- Mark Lewis
>
> On Tue, 2005-04-12 at 22:09, Slavisa Garic wrote:
> > This is a serious problem for me, as there are multiple users using our
> > software on our server, and I want to avoid having connections
> > open for a long time. In the scenario mentioned below I haven't
> > explained the magnitude of the communication happening between Agents
> > and DBServer. There could possibly be 100 or more Agents per
> > experiment, per user, running on remote machines at the same time;
> > hence we need short transactions/pgsql connections. Agents need a
> > reliable connection, because failure to connect could mean a loss of
> > computation results that were gathered over long periods of time.
>
>