Re: JDBC Large ResultSet problem + BadTimeStamp Patch

From: Peter Mount
Subject: Re: JDBC Large ResultSet problem + BadTimeStamp Patch
Date:
Msg-id: Pine.LNX.4.21.0010121513440.435-100000@maidast.demon.co.uk
In reply to: Re: JDBC Large ResultSet problem + BadTimeStamp Patch  (Steve Wampler <swampler@noao.edu>)
List: pgsql-interfaces
On Thu, 12 Oct 2000, Steve Wampler wrote:

> Peter Mount wrote:
> > 
> > On Wed, 11 Oct 2000, Steve Wampler wrote:
> > 
> > > Ah, that probably explains why I've seen "tuple arrived before metadata"
> > > messages when I've got several apps talking through CORBA to a java app
> > > that connects to postgres.  Do I need to synchronize both inserts and
> > > queries at the java app level to prevent this?  (I was hoping that
> > > the BEGIN/END block in a transaction would be sufficient, but this makes
> > > it sound as though it isn't.)
> > 
> > I think you may need to, although the existing thread locking in the
> > driver should prevent this. BEGIN/END is protecting the tables, but the
> > "tuple arrived before metadata" message is from the network protocol
> > (someone correct me at any point if I'm wrong).
> > 
> > What happens at the moment is that when a query is issued by JDBC, a lock
> > is made against the network connection, and then the query is issued. Once
> > everything has been read, the lock is released. This mechanism should
> > prevent any one thread using the same network connection as another which
> > is already using it.
> > 
> > Is your corba app under heavy load when this happens, or can it happen
> > with say 2-3 apps running?
> 
> I'm not sure how to define heavy load, but I'd say yes - there were about
> 10 processes (spread across 3 machines) all talking corba to the app with
> the jdbc app to postgres.  Two apps were doing block inserts while another 8
> were doing queries.  I think there were around 100000 entries added in a
> 20-25 minute time span, and there would have been queries accessing most
> of those during the same period (the DB acts both as an archive and as
> a cache between an instrument and the processes that analyze the instrument's
> data).
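
The per-connection locking I described above can be sketched roughly like this (illustrative Java only, not the actual driver source; the `LockedConnection` class and its method are made-up names for the purpose of the sketch):

```java
// Sketch of the locking scheme described above: one lock per physical
// network connection, held from the moment a query is sent until every
// row and all metadata have been read back. NOTE: LockedConnection is
// a hypothetical class, not part of the JDBC driver source.
public class LockedConnection {
    private final Object networkLock = new Object();

    public String executeQuery(String sql) {
        synchronized (networkLock) {
            // While this block runs, no other thread can interleave
            // its own traffic on the same connection, which is what
            // should prevent "tuple arrived before metadata".
            return "results for: " + sql;
        }
    }
}
```

The point of holding the lock across the whole read, rather than just the write, is that the backend's reply is a single stream: if a second thread slipped a query in before the first reply finished, the responses would interleave on the wire.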

Hmmm, I think you may want to look at using a connection pool, especially
with 100k entries. I've just looked through my Corba books, and they all
seem to use some form of pool, so perhaps that's the assumed best way to
do it.
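
A minimal pool along those lines might look like this (a hypothetical `Pool` class of my own, not something shipped with the driver; in practice you would fill it with several `java.sql.Connection` objects opened up front):

```java
// Minimal connection-pool sketch: each worker thread checks a
// connection out, uses it exclusively, and returns it, so no two
// threads ever share one network connection at the same time.
// NOTE: this Pool class is illustrative, not part of the driver.
import java.util.ArrayDeque;
import java.util.Deque;

public class Pool<T> {
    private final Deque<T> idle = new ArrayDeque<>();

    public Pool(Iterable<T> connections) {
        for (T c : connections) idle.push(c);
    }

    // Block until a connection is free, then hand it to the caller.
    public synchronized T acquire() throws InterruptedException {
        while (idle.isEmpty()) wait();
        return idle.pop();
    }

    // Return a connection and wake one waiting thread.
    public synchronized void release(T conn) {
        idle.push(conn);
        notify();
    }
}
```

With ~10 client processes hammering the database, a pool of a handful of connections should spread the load and avoid every thread contending for a single connection's lock.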

Peter

-- 
Peter T Mount peter@retep.org.uk http://www.retep.org.uk
PostgreSQL JDBC Driver http://www.retep.org.uk/postgres/
Java PDF Generator http://www.retep.org.uk/pdf/



