Re: JDBC Large ResultSet problem + BadTimeStamp Patch

From: Peter Mount <peter(at)retep(dot)org(dot)uk>
To: Steve Wampler <swampler(at)noao(dot)edu>
Cc: Joseph Shraibman <jks(at)selectacast(dot)net>, "pgsql-interfaces(at)postgreSQL(dot)org" <pgsql-interfaces(at)postgresql(dot)org>
Subject: Re: JDBC Large ResultSet problem + BadTimeStamp Patch
Date: 2000-10-12 14:15:53
Message-ID: Pine.LNX.4.21.0010121513440.435-100000@maidast.demon.co.uk
Lists: pgsql-hackers pgsql-interfaces

On Thu, 12 Oct 2000, Steve Wampler wrote:

> Peter Mount wrote:
> >
> > On Wed, 11 Oct 2000, Steve Wampler wrote:
> >
> > > Ah, that probably explains why I've seen "tuple arrived before metadata"
> > > messages when I've got several apps talking through CORBA to a java app
> > > that connects to postgres. Do I need to synchronize both inserts and
> > > queries at the java app level to prevent this? (I was hoping that
> > > the BEGIN/END block in a transaction would be sufficient, but this makes
> > > it sound as though it isn't.)
> >
> > I think you may need to, although the existing thread locking in the
> > driver should prevent this. BEGIN/END is protecting the tables, but the
> > "tuple arrived before metadata" message is from the network protocol
> > (someone correct me at any point if I'm wrong).
> >
> > What happens at the moment is that when a query is issued by JDBC, a lock
> > is made against the network connection, and then the query is issued. Once
> > everything has been read, the lock is released. This mechanism should
> > prevent any one thread from using the same network connection as
> > another that is already using it.
> >
> > Is your corba app under heavy load when this happens, or can it happen
> > with say 2-3 apps running?
>
> I'm not sure how to define heavy load, but I'd say yes - there were about
> 10 processes (spread across 3 machines) all talking corba to the app that
> uses JDBC to connect to postgres. Two apps were doing block inserts while
> another 8 were doing queries. I think there were around 100000 entries
> added in a 20-25 minute time span, and there would have been queries
> accessing most of those during the same period (the DB acts both as an
> archive and as a cache between an instrument and the processes that
> analyze the instrument's data).
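
To illustrate the locking I described above, here's a very rough sketch.
This is simplified and not the actual driver source - the class and
method names here are just illustrative:

    class PgStream {
        private final Object lock = new Object();

        // One thread at a time runs the full query/response cycle. The
        // lock is only released once every tuple has been read, so a
        // second thread can't interleave its own query mid-stream and
        // provoke "tuple arrived before metadata".
        byte[][] execQuery(String sql) throws java.io.IOException {
            synchronized (lock) {
                sendQuery(sql);        // write the query message
                return readResults();  // read the metadata, then the tuples
            }
        }

        private void sendQuery(String sql) throws java.io.IOException {
            // ... write to the socket ...
        }

        private byte[][] readResults() throws java.io.IOException {
            // ... read until the backend says it's done ...
            return new byte[0][];
        }
    }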

Hmmm, I think you may want to look at using a connection pool, especially
with 100k entries. I've just looked through my Corba books, and they all
seem to use some form of pool, so perhaps that's the assumed best way to
do it.
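
Something like this is roughly what I mean - just a sketch off the top
of my head, the class and method names are made up and not anything in
the driver:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.LinkedList;

    public class SimplePool {
        private final LinkedList free = new LinkedList();

        public SimplePool(String url, String user, String pass, int size)
                throws SQLException {
            // Open a fixed number of connections up front
            for (int i = 0; i < size; i++)
                free.add(DriverManager.getConnection(url, user, pass));
        }

        // Block until a connection is free, so no two threads ever
        // share the same network connection at the same time.
        public synchronized Connection acquire() throws InterruptedException {
            while (free.isEmpty())
                wait();
            return (Connection) free.removeFirst();
        }

        public synchronized void release(Connection c) {
            free.add(c);
            notify();
        }
    }

Each worker thread then does acquire(), runs its statements, and
release()s the connection when it's done, instead of everyone hammering
the one shared connection.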

Peter

--
Peter T Mount peter(at)retep(dot)org(dot)uk http://www.retep.org.uk
PostgreSQL JDBC Driver http://www.retep.org.uk/postgres/
Java PDF Generator http://www.retep.org.uk/pdf/
