From: Paul Thomas <paul(at)tmsl(dot)demon(dot)co(dot)uk>
To: "pgsql-jdbc (at) postgresql (dot) org" <pgsql-jdbc(at)postgresql(dot)org>
Subject: Re: CVS JDBC driver will try to use server-side-prepare on unpreparable SQL
Date: 2003-08-15 15:22:17
Message-ID: 20030815162217.A5328@bacon
Lists: pgsql-jdbc
On 15/08/2003 13:17 Oliver Jowett wrote:
> It looks like the driver is trying to use server-side prepare on SQL that
> it
> won't work on:
>
> Aug 16 00:05:40 flood postgres[12989]: [12-1] LOG: query: PREPARE
> JDBC_STATEMENT_4 AS CREATE TABLE
> "testBigDB/persisted_testBigDB/persisted_one"(
> pk BYTEA NOT
> Aug 16 00:05:40 flood postgres[12989]: [12-2] NULL, generation INT8
> NOT NULL, data BYTEA NOT NULL, CONSTRAINT
> Aug 16 00:05:40 flood postgres[12989]: [12-3]
> "pkey_testBigDB/persisted_testBigDB/persisted_one" PRIMARY KEY (pk));
> EXECUTE JDBC_STATEMENT_4
> Aug 16 00:05:40 flood postgres[12989]: [13] ERROR: parser: parse error
> at or near "CREATE" at character 29
>
> This then turns up as a SQLException on the java side.
>
> Yes, I know, "don't do that then!", but isn't the plan to default to
> server-side prepare eventually?
IMHO, there are some issues to be sorted out first:
a) making them work with connection pools. Currently (as in the released
driver, not CVS tip) the server-side statement is deallocated when the JDBC
statement is closed. I was looking at this a few weeks ago whilst
experimenting with server-side prepares. Somehow, the back-end statement
names need to be cached so the statements can remain "alive" after the JDBC
statement has been closed (and also deallocated when connections are
closed).
b) as I understand it, the server-side prepared statement might end up
using a sub-optimal plan after some time due to the effect of
inserts/updates/deletes. This could be important in web applications
(which would probably be using connection pooling too) where a statement
might stay prepared for weeks or even months.
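To make (a) concrete, here is a minimal sketch of the kind of
per-connection cache I have in mind. The class and method names are made
up for illustration, not actual driver code:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Hypothetical per-connection cache: maps SQL text to a back-end
// statement name so a PREPAREd statement can outlive the JDBC
// Statement object that created it.
public class PreparedStatementCache {
    private final Map<String, String> sqlToName = new HashMap<>();
    private int counter = 0;

    // Returns the back-end statement name for this SQL, registering a
    // new name if the SQL has not yet been prepared on this connection.
    // The caller would issue "PREPARE <name> AS <sql>" only the first
    // time a name is handed out.
    public synchronized String nameFor(String sql) {
        String name = sqlToName.get(sql);
        if (name == null) {
            name = "JDBC_STATEMENT_" + (++counter);
            sqlToName.put(sql, name);
        }
        return name;
    }

    public synchronized boolean isPrepared(String sql) {
        return sqlToName.containsKey(sql);
    }

    // Called when the physical connection closes: every cached name
    // can be DEALLOCATEd (or simply dropped, since closing the
    // back-end session discards its prepared statements anyway).
    public synchronized Collection<String> namesToDeallocate() {
        return new ArrayList<>(sqlToName.values());
    }
}
```

The point is that closing a JDBC statement would no longer DEALLOCATE the
back-end statement; only closing the physical connection would.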
> Should we only be doing PREPARE on queries that are known to be safe
> (e.g.
> single-statement SELECTs), or is it better to try to catch the errors and
> abandon the prepare? (more general, but sounds a bit hairy).
Hopefully SELECT, INSERT, UPDATE and DELETE should all be OK. The test I
tried was with an INSERT and that worked very well - 1000 per second
compared to 420 per second without server-side prepares. If this test was
anything like representative, then there are big performance gains here.
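One simple (and admittedly crude) way to restrict PREPARE to statements
known to be safe would be a leading-keyword check. This is only a sketch
of the idea, not what the driver actually does:

```java
import java.util.Locale;

// Crude illustration: only single SELECT/INSERT/UPDATE/DELETE
// statements are treated as safe to wrap in PREPARE ... / EXECUTE.
// Utility statements such as CREATE TABLE fall through to plain
// execution, avoiding the parse error Oliver reported.
public final class PrepareGuard {
    private PrepareGuard() {}

    public static boolean isPreparable(String sql) {
        String s = sql.trim().toUpperCase(Locale.ROOT);
        boolean keyword = s.startsWith("SELECT")
                || s.startsWith("INSERT")
                || s.startsWith("UPDATE")
                || s.startsWith("DELETE");
        // Reject multi-statement strings: a semicolon anywhere before
        // the end suggests more than one statement in the string.
        int semi = s.indexOf(';');
        boolean single = semi < 0 || semi == s.length() - 1;
        return keyword && single;
    }
}
```

A real check would need to be smarter (semicolons inside string literals,
comments, etc.), which is why catching the error and abandoning the
prepare might still be needed as a fallback.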
>
> The reason that this came up is I'm modifying the driver to allow
> server-side prepare to be toggled at the connection- and datasource-
> level.
> Patches for that to follow once I've sorted this problem out.
Does that include having a parameter in the URL? Some people might find
that very useful, as they could then use server-side prepares without
having to import any postgres-specific classes; for connection pooling,
I think it would be essential.
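For instance, a driver could pick the toggle out of the JDBC URL's query
string along these lines. The parameter name "useServerPrepare" is an
assumption for illustration only - whatever name Oliver's patch chooses
would apply:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of reading a boolean toggle from a JDBC URL query string,
// e.g. jdbc:postgresql://host/db?useServerPrepare=true
// The parameter name is hypothetical, not the driver's actual name.
public final class UrlParams {
    private UrlParams() {}

    // Splits everything after '?' into key=value pairs.
    public static Map<String, String> parse(String url) {
        Map<String, String> params = new HashMap<>();
        int q = url.indexOf('?');
        if (q < 0) return params;
        for (String pair : url.substring(q + 1).split("&")) {
            int eq = pair.indexOf('=');
            if (eq > 0) {
                params.put(pair.substring(0, eq), pair.substring(eq + 1));
            }
        }
        return params;
    }

    public static boolean serverPrepareEnabled(String url) {
        return "true".equalsIgnoreCase(parse(url).get("useServerPrepare"));
    }
}
```

That way pooling libraries, which only ever see a URL and the standard
javax.sql interfaces, could still turn the feature on.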
rgds
--
Paul Thomas
+------------------------------+---------------------------------------------+
| Thomas Micro Systems Limited | Software Solutions for the Smaller Business |
| Computer Consultants         | http://www.thomas-micro-systems-ltd.co.uk   |
+------------------------------+---------------------------------------------+