On Mar 31, 2011, at 3:34 PM, Quartz wrote:
Besides, that's what release notes are for. And I dare say, if they expected a transaction when using a batch with autocommit=true, it's about time they learned their mistake. The JDBC API is a contract. We can't make an exception for Postgres.
Quartz, the problem is that the behavior of batch updates when autocommit=true is not spec-defined; it's implementation-defined. Just because MySQL does it one way doesn't make that the "right" way. Look at this post from 2009:
"The behavior after a failure is DBMS specific, as documented in Statement.executeBatch()<http://java.sun.com/javase/6/docs/api/java/sql/Statement.html#executeBatch()>. Some unit tests I've run had shown that MSSQL continues with the rest of the statements while Oracle aborts the batch immediately."
And reading through the JDBC guide, albeit for an older version, here:
"For this reason, autocommit should always be turned off when batch updates are done. The commit behavior of executeBatch is always implementation defined when an error occurs and autocommit is true."
And from the most recent JDBC tutorial, here:
"To allow for correct error handling, you should always disable auto-commit mode before beginning a batch update."
It seems to me that this is a case of expecting behavior that is not spec-defined: your prior experience with MySQL has taught you to expect certain behavior, so you expect it to be present in other drivers too, even though the spec does not state what should happen (it explicitly says it's implementation-defined). The first post I linked to makes this clear: MSSQL continues after a failure (as, by your own admission, does MySQL), while Oracle aborts the batch.
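To make the advice from the tutorial concrete: the following is a minimal sketch, not from this thread, using Python's stdlib sqlite3 (since a JDBC example can't run without a database driver) to show why a batch should run inside an explicit transaction. The table name, values, and helper function are all hypothetical; the point is that under autocommit, rows inserted before a failure are already durable, while an explicit transaction can be rolled back as a unit.

```python
# Hypothetical illustration: a "batch" of inserts where one row fails,
# run once in autocommit mode and once inside an explicit transaction.
import sqlite3

def run_batch(conn, autocommit):
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
    rows = [(1,), (2,), (2,), (3,)]  # third row violates the PRIMARY KEY
    try:
        if not autocommit:
            conn.execute("BEGIN")  # analogous to setAutoCommit(false)
        for r in rows:
            conn.execute("INSERT INTO t (id) VALUES (?)", r)
        conn.commit()
    except sqlite3.IntegrityError:
        if not autocommit:
            conn.rollback()  # the whole batch disappears atomically
    return [row[0] for row in conn.execute("SELECT id FROM t ORDER BY id")]

# Autocommit: rows inserted before the failure are already committed.
auto = sqlite3.connect(":memory:", isolation_level=None)
print(run_batch(auto, autocommit=True))   # [1, 2]

# Explicit transaction: rollback leaves no partial batch behind.
txn = sqlite3.connect(":memory:", isolation_level=None)
print(run_batch(txn, autocommit=False))   # []
```

This is exactly the situation the tutorial warns about: with autocommit on, "correct error handling" is impossible because there is nothing left to roll back.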
-- Jeff Hubbach