Discussion: JDBC: detecting lost backend; 8192 byte limit in queries


JDBC: detecting lost backend; 8192 byte limit in queries

From: Ari Halberstadt
Hi,

I've been fairly happily using the PostgreSQL JDBC driver. I've run into a
snag, though, when the backend dies. The driver never detects that the
socket to the backend is lost and it will throw IO exceptions whenever a
statement using an old connection is executed. This is a problem for me
because I keep connections open for long periods of time in a connection
pool. My temporary workaround for this has been to use a wrapper around the
SQL statement class that traps all exceptions and examines them to
determine if they indicate a lost connection. The messages associated with
lost connections start with the following strings:

    "The backend has broken the connection.",
    "IOError while reading from backend:",
    "Error flushing output:",
    "I/O Error:",

I selected these messages based on examining the Java source for the
driver. When my statement wrapper detects one of these errors while using
the PostgreSQL JDBC driver it then knows that the connection is lost and
calls the connection's close() method. What would be much nicer, and
eliminate this ugly hack from my code, is if the driver would notice when
one of these errors occurs and set its isClosed() flag to true.
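
For reference, here's a stripped-down sketch of the wrapper idea (the
class and helper names are just illustrative, not part of the driver):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    /** Wraps a Statement and closes its Connection when an exception's
     *  message indicates that the backend connection was lost. */
    public class BackendAwareStatement {
        /* Message prefixes the 6.4.x driver produces on a lost backend. */
        private static final String[] LOST_PREFIXES = {
            "The backend has broken the connection.",
            "IOError while reading from backend:",
            "Error flushing output:",
            "I/O Error:"
        };

        private Connection conn;
        private Statement stmt;

        public BackendAwareStatement(Connection conn) throws SQLException {
            this.conn = conn;
            this.stmt = conn.createStatement();
        }

        public ResultSet executeQuery(String sql) throws SQLException {
            try {
                return stmt.executeQuery(sql);
            } catch (SQLException e) {
                if (indicatesLostConnection(e)) {
                    try {
                        conn.close(); /* so the pool sees isClosed() == true */
                    } catch (SQLException ignore) {
                        /* the original exception matters more */
                    }
                }
                throw e;
            }
        }

        private static boolean indicatesLostConnection(SQLException e) {
            String msg = e.getMessage();
            if (msg == null) return false;
            for (int i = 0; i < LOST_PREFIXES.length; i++) {
                if (msg.startsWith(LOST_PREFIXES[i])) return true;
            }
            return false;
        }
    }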

The other thing that has been making me reconsider using PostgreSQL is the
8192-byte SQL statement limit. The problem is that I want to store large
amounts of data, e.g., large messages, but when I try to do an SQL insert
containing the data the driver throws an SQL exception because the
statement is too large. I realize this limit is in the postgres backend
sources (it appears, among other places, as a static size in the statement
parser), but I wish there were some way around it so I could continue to
use PostgreSQL and not have to switch to MySQL with its lack of
transaction support.

So, to summarize:

- driver needs a fix to set the isClosed() flag when the connection to the
backend is lost

- how do I store more than 8192 bytes?

Versions/systems/etc.: PostgreSQL 6.4.2, Solaris JDK1.2; PostgreSQL
snapshot from (I think) January 1st on RHLinux 5.1 with Blackdown JDK
1.1.7v1a.

-- Ari Halberstadt mailto:ari@shore.net <http://www.magiccookie.com/>
PGP public key available at <http://www.magiccookie.com/pgpkey.txt>



Re: [INTERFACES] JDBC: detecting lost backend; 8192 byte limit in queries

From: Tom Lane
Ari Halberstadt <ari@shore.net> writes:
> - how do I store more than 8192 bytes?

Eliminating the restriction on the textual length of a query is on my
to-do list.  It's probably not going to happen in time for 6.5, but
maybe for 6.6 (say, this summer).

There is a *separate* restriction on the stored size of a tuple,
namely that it not exceed 1 disk block --- which just happens to be
8K as well by default, but it's a different restriction on a different
kind of data.

If your objective is to be able to insert a text field longer than 8K
then both of these restrictions will get in your way.

There has been some talk of allowing tuples to span multiple disk
blocks, but I get the impression that it's still a ways from happening.
In the meantime it is possible to compile Postgres with a larger
block size; if 16K or so per tuple would solve your problem then that
is a feasible workaround.  (Would someone who's done that note for the
record exactly what to change?)

The query length limit appears as a constant in enough different places
that patching it up to a larger value would probably be pretty tedious
:-(.  I haven't tried it.

            regards, tom lane

Re: [INTERFACES] JDBC: detecting lost backend; 8192 byte limit in queries

From: Peter T Mount
On Wed, 17 Feb 1999, Ari Halberstadt wrote:

> Hi,
>
> I've been fairly happily using the PostgreSQL JDBC driver. I've run into a
> snag, though, when the backend dies. The driver never detects that the
> socket to the backend is lost and it will throw IO exceptions whenever a
> statement using an old connection is executed. This is a problem for me
> because I keep connections open for long periods of time in a connection
> pool. My temporary workaround for this has been to use a wrapper around the
> SQL statement class that traps all exceptions and examines them to
> determine if they indicate a lost connection. The messages associated with
> lost connections start with the following strings:
>
>     "The backend has broken the connection.",
>     "IOError while reading from backend:",
>     "Error flushing output:",
>     "I/O Error:",
>
> I selected these messages based on examining the Java source for the
> driver. When my statement wrapper detects one of these errors while using
> the PostgreSQL JDBC driver it then knows that the connection is lost and
> calls the connection's close() method. What would be much nicer, and
> eliminate this ugly hack from my code, is if the driver would notice when
> one of these errors occurs and set its isClosed() flag to true.

The 6.5 driver has changed a lot from 6.4.x (mainly to enable compilation
under Java2), so I'm planning some cleanup over this weekend. I'll
probably add these in either tomorrow or Saturday.

Also, tomorrow I'm looking at the closed() code, as in some cases it's
causing some strange effects (this was being discussed on the hackers
list yesterday).

> The other thing that has been making me reconsider using PostgreSQL is
> the 8192-byte SQL statement limit. The problem is that I want to store
> large amounts of data, e.g., large messages, but when I try to do an
> SQL insert containing the data the driver throws an SQL exception
> because the statement is too large. I realize this limit is in the
> postgres backend sources (it appears, among other places, as a static
> size in the statement parser), but I wish there were some way around
> it so I could continue to use PostgreSQL and not have to switch to
> MySQL with its lack of transaction support.
>
> So, to summarize:
>
> - driver needs a fix to set the isClosed() flag when the connection to the
> backend is lost

As I said above, it will be in 6.5.

> - how do I store more than 8192 bytes?

I hope you mean per row? There are two ways: either increase the block
size (which can only be done by recompiling the backend and running
initdb), or use large objects.

If you don't want to search the messages, then large objects would be the
best bet.
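
Something like the following, using the standard JDBC streaming calls
(just a sketch, untested; the schema and connection details are made up,
and whether setBinaryStream() maps onto large objects in your driver
version is something to verify). The data travels outside the query
string, so the 8192-byte statement limit doesn't apply:

    import java.io.ByteArrayInputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    /* Hypothetical schema: CREATE TABLE messages (id int4, body oid) */
    public class LargeObjectExample {
        public static void main(String[] args) throws Exception {
            Class.forName("postgresql.Driver");
            Connection conn = DriverManager.getConnection(
                "jdbc:postgresql:test", "user", "password");
            conn.setAutoCommit(false); /* large objects need a transaction */

            byte[] body = new byte[20000]; /* well over 8192 bytes */
            PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO messages (id, body) VALUES (?, ?)");
            ps.setInt(1, 1);
            ps.setBinaryStream(2, new ByteArrayInputStream(body), body.length);
            ps.executeUpdate();
            conn.commit();
            ps.close();
            conn.close();
        }
    }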

Another method would be to split the messages into multiple rows and add
a field to indicate the sequence.
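
In code, the split could look something like this (schema and names are
only an example; the chunk size leaves headroom under the 8K query limit,
since the driver builds the final query string on the client side):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    /* Hypothetical schema:
     *   CREATE TABLE message_chunks (msg_id int4, seq int4, chunk text)
     */
    public class ChunkedStore {
        /* Stay well under 8192 bytes so the INSERT text plus escaping fits. */
        private static final int CHUNK_SIZE = 4000;

        public static void store(Connection conn, int msgId, String text)
                throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO message_chunks (msg_id, seq, chunk)" +
                " VALUES (?, ?, ?)");
            int seq = 0;
            for (int off = 0; off < text.length(); off += CHUNK_SIZE) {
                int end = Math.min(off + CHUNK_SIZE, text.length());
                ps.setInt(1, msgId);
                ps.setInt(2, seq++);
                ps.setString(3, text.substring(off, end));
                ps.executeUpdate();
            }
            ps.close();
        }
    }

Reading it back is then a SELECT ... ORDER BY seq, concatenating the
chunks.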

> Versions/systems/etc.: PostgreSQL 6.4.2, Solaris JDK1.2; PostgreSQL
> snapshot from (I think) January 1st on RHLinux 5.1 with Blackdown JDK
> 1.1.7v1a.

Peter

--
       Peter T Mount peter@retep.org.uk
      Main Homepage: http://www.retep.org.uk
PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres
 Java PDF Generator: http://www.retep.org.uk/pdf


Re: [INTERFACES] JDBC: detecting lost backend; 8192 byte limit in queries

From: Ari Halberstadt
>On Wed, 17 Feb 1999, Ari Halberstadt wrote:
>> - how do I store more than 8192 bytes?
>...
>Another method would be to split the messages into multiple rows and add
>a field to indicate the sequence.

This is what I ended up doing (resurrected some old code I'd chucked).
Thanks for the help.

-- Ari Halberstadt mailto:ari@shore.net <http://www.magiccookie.com/>
PGP public key available at <http://www.magiccookie.com/pgpkey.txt>