Re: [INTERFACES] JDBC: detecting lost backend; 8192 byte limit in queries

From: Peter T Mount <peter(at)retep(dot)org(dot)uk>
To: Ari Halberstadt <ari(at)shore(dot)net>
Cc: pgsql-interfaces(at)postgresql(dot)org
Subject: Re: [INTERFACES] JDBC: detecting lost backend; 8192 byte limit in queries
Date: 1999-02-18 18:41:57
Message-ID: Pine.LNX.4.04.9902181823560.8265-100000@maidast.retep.org.uk
Lists: pgsql-interfaces

On Wed, 17 Feb 1999, Ari Halberstadt wrote:

> Hi,
>
> I've been fairly happily using the PostgreSQL JDBC driver. I've run into a
> snag, though, when the backend dies. The driver never detects that the
> socket to the backend is lost and it will throw IO exceptions whenever a
> statement using an old connection is executed. This is a problem for me
> because I keep connections open for long periods of time in a connection
> pool. My temporary workaround for this has been to use a wrapper around the
> SQL statement class that traps all exceptions and examines them to
> determine if they indicate a lost connection. The messages associated with
> lost connections start with the following strings:
>
> "The backend has broken the connection.",
> "IOError while reading from backend:",
> "Error flushing output:",
> "I/O Error:",
>
> I selected these messages based on examining the Java source for the
> driver. When my statement wrapper detects one of these errors while using
> the PostgreSQL JDBC driver it then knows that the connection is lost and
> calls the connection's close() method. What would be much nicer, and
> eliminate this ugly hack from my code, is if the driver would notice when
> one of these errors occurs and set its isClosed() flag to true.

The 6.5 driver has changed a lot from 6.4.x (mainly to enable compilation
under Java2), so I'm planning some cleanup over this weekend. I'll
probably add these checks in either tomorrow or Saturday.

Also, tomorrow I'm looking at the closed() code, as in some cases it's
causing some strange effects (this was being discussed on the hackers list
yesterday).
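
In the meantime, the workaround can sit entirely on top of the standard
JDBC API. Roughly something like this (just a sketch, not driver code; the
wrapper class is made up, and the message prefixes are the ones from your
list):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical wrapper: runs a query and, if the exception message looks
// like a lost backend connection, closes the Connection so a pool can
// discard it.
public class LostConnectionStatement {
    // Message prefixes the 6.4.x driver uses when the backend goes away
    private static final String[] LOST_PREFIXES = {
        "The backend has broken the connection.",
        "IOError while reading from backend:",
        "Error flushing output:",
        "I/O Error:"
    };

    private final Connection conn;

    public LostConnectionStatement(Connection conn) {
        this.conn = conn;
    }

    public ResultSet executeQuery(String sql) throws SQLException {
        Statement stmt = conn.createStatement();
        try {
            return stmt.executeQuery(sql);
        } catch (SQLException e) {
            if (isLostConnection(e)) {
                try {
                    conn.close(); // let the pool throw this connection away
                } catch (SQLException ignore) {
                }
            }
            throw e;
        }
    }

    private static boolean isLostConnection(SQLException e) {
        String msg = e.getMessage();
        if (msg == null) {
            return false;
        }
        for (int i = 0; i < LOST_PREFIXES.length; i++) {
            if (msg.startsWith(LOST_PREFIXES[i])) {
                return true;
            }
        }
        return false;
    }
}

Once the driver sets its closed flag itself, the message matching can go,
and a pool only needs to check isClosed().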

> The other thing that has been making me reconsider using PostgreSQL is
> the 8192 SQL statement limit. The problem is that I want to store
> large amounts of data, e.g., large messages, but when I try to do an
> SQL insert containing the data the driver throws an SQL exception
> because the statement is too large. I realize this limit is in the
> postgres backend sources (it appears, among other places, as a static
> size in the statement parser), but wish there were some way around
> this so I could continue to use PostgreSQL and not have to switch to
> mySQL with its lack of transaction support.
>
> So, to summarize:
>
> - driver needs a fix to set the isClosed() flag when the connection to the
> backend is lost

As I said above, it will be in 6.5.

> - how do I store more than 8192 bytes?

I hope you mean per row? There are two ways: either increase the block
size (which can only be done by recompiling the backend and running
initdb), or use large objects.

If you don't want to search the messages, then large objects would be the
best bet.
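
The rough idea there is to store the message body as a large object and
keep only its oid in the row. Something like this sketch (the messages
table is made up, and the large object class and package names vary
between driver versions, so treat it as an outline rather than something
to paste in):

import java.sql.Connection;
import java.sql.PreparedStatement;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LargeObjectSketch {
    // Hypothetical table: CREATE TABLE messages (id int4, body oid)
    public static void storeMessage(Connection conn, int id, byte[] body)
            throws Exception {
        // Large object calls must run inside a transaction
        conn.setAutoCommit(false);

        LargeObjectManager lom = ((PGConnection) conn).getLargeObjectAPI();

        // Create the large object and write the message body into it
        long oid = lom.createLO(LargeObjectManager.READ | LargeObjectManager.WRITE);
        LargeObject lo = lom.open(oid, LargeObjectManager.WRITE);
        lo.write(body);
        lo.close();

        // Store only the oid in the row; the data itself lives outside the table
        PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO messages (id, body) VALUES (?, ?)");
        ps.setInt(1, id);
        ps.setLong(2, oid);
        ps.executeUpdate();
        ps.close();

        conn.commit();
    }
}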

Another method would be to split the messages into multiple rows and add
a field to indicate the sequence.
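
A sketch of that approach (the message_chunks table and its columns are
made up for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ChunkedMessageStore {
    // Keep each chunk well under the 8192 byte query limit, leaving room
    // for the rest of the INSERT text and any quote escaping.
    private static final int CHUNK_SIZE = 3000;

    // Hypothetical table:
    //   CREATE TABLE message_chunks (msg_id int4, seq int4, chunk text)
    public static void storeMessage(Connection conn, int msgId, String body)
            throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO message_chunks (msg_id, seq, chunk) VALUES (?, ?, ?)");
        try {
            int seq = 0;
            for (int off = 0; off < body.length(); off += CHUNK_SIZE) {
                int end = Math.min(off + CHUNK_SIZE, body.length());
                ps.setInt(1, msgId);
                ps.setInt(2, seq++);
                ps.setString(3, body.substring(off, end));
                ps.executeUpdate();
            }
        } finally {
            ps.close();
        }
    }
}

Reading it back is then just a SELECT ... ORDER BY seq, concatenating the
chunks.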

> Versions/systems/etc.: PostgreSQL 6.4.2, Solaris JDK1.2; PostgreSQL
> snapshot from (I think) January 1st on RHLinux 5.1 with Blackdown JDK
> 1.1.7v1a.

Peter

--
Peter T Mount peter(at)retep(dot)org(dot)uk
Main Homepage: http://www.retep.org.uk
PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres
Java PDF Generator: http://www.retep.org.uk/pdf
