Re: JDBC behaviour

From: Dave Cramer
Subject: Re: JDBC behaviour
Date:
Msg-id: CADK3HH+L_ym+7yPH51joS4RUEw07sLX=0VihK1xdff-GRU+y-A@mail.gmail.com
In response to: Re: JDBC behaviour (Thomas Kellerer <spam_eater@gmx.net>)
List: pgsql-jdbc
This is really just a case of:

The OP did not fully understand the transaction semantics of PostgreSQL.

The OP wrote a bunch of code under the assumption that the transaction semantics worked the way he thought they would.

The OP considered this a bug.

As you can see from the spec excerpt below, the behaviour is left to the driver, and we are compliant: we do not continue processing after the first failed execution, and we return an empty array.

14.1.3 Handling Failures during Execution (JDBC 4.1 Specification, July 2011)

A JDBC driver may or may not continue processing the remaining commands in a batch once execution of a command fails. However, a JDBC driver must always provide the same behavior with a particular data source. For example, a driver cannot continue processing after a failure for one batch and not continue processing for another batch.

If a driver stops processing after the first failure, the array returned by the method BatchUpdateException.getUpdateCounts will always contain fewer entries than there were statements in the batch. Since statements are executed in the order that they are added to the batch, if the array contains N elements, this means that the first N elements in the batch were processed successfully when executeBatch was called.

When a driver continues processing in the presence of failures, the number of elements in the array returned by the method BatchUpdateException.getUpdateCounts always equals the number of commands in the batch. When a BatchUpdateException object is thrown and the driver continues processing after a failure, the array of update counts will contain the following BatchUpdateException constant:

Statement.EXECUTE_FAILED — the command failed to execute successfully. This value is also returned for commands that could not be processed for some reason—such commands fail implicitly. JDBC drivers that do not continue processing after a failure never return Statement.EXECUTE_FAILED in an update count array. Drivers of this type simply return a status array containing an entry for each command that was processed successfully.

A JDBC technology-based application can distinguish a JDBC driver that continues processing after a failure from one that does not by examining the size of the array returned by BatchUpdateException.getUpdateCounts. A JDBC driver that continues processing always returns an array containing one entry for each element in the batch. A JDBC driver that does not continue processing after a failure will always return an array whose number of entries is less than the number of commands in the batch.
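The distinction the spec draws can be checked mechanically from the exception itself. Here is a small sketch of that check; the class and method names are my own for illustration, not part of PgJDBC or the spec:

```java
import java.sql.BatchUpdateException;
import java.sql.Statement;

public class BatchFailureMode {

    // Per JDBC 4.1 section 14.1.3: a driver that keeps going after a
    // failure returns one update count per batched command (failed ones
    // marked Statement.EXECUTE_FAILED); a driver that stops, as PgJDBC
    // does, returns fewer entries than the batch had commands.
    public static boolean continuedAfterFailure(int batchSize, int[] updateCounts) {
        return updateCounts.length == batchSize;
    }

    // Number of commands the counts array reports as having executed
    // successfully (EXECUTE_FAILED markers are skipped).
    public static int successfulCommands(int[] updateCounts) {
        int ok = 0;
        for (int count : updateCounts) {
            if (count != Statement.EXECUTE_FAILED) {
                ok++;
            }
        }
        return ok;
    }

    // Typical call site: examine the counts from a failed executeBatch().
    public static String describe(int batchSize, BatchUpdateException ex) {
        int[] counts = ex.getUpdateCounts();
        return continuedAfterFailure(batchSize, counts)
                ? "driver continued; " + successfulCommands(counts) + " commands succeeded"
                : "driver stopped after " + counts.length + " successful commands";
    }
}
```

Since PgJDBC is of the "stops processing" kind, code written for a "continues processing" driver that indexes the counts array by batch position needs this check before it can be ported safely.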




On 20 February 2016 at 09:51, Thomas Kellerer <spam_eater@gmx.net> wrote:
Craig Ringer wrote on 20.02.2016 at 11:44:
Please provide a complete, compileable, self-contained example demonstrating behaviour that
causes a failure or problem in PgJDBC but works correctly with at least most of:

- MS SQL
- Oracle
- DB2
- Sybase
- MySQL

including test-run output showing exactly how each of the other implementations behaves.

Disclaimer: I do agree with you that Postgres' behaviour is correct. A transaction either succeeds completely or not at all.

Having said that: Postgres' behaviour *is* unique in this regard.

Consider the following table:

  create table x (id integer not null primary key);

The following code will run with Oracle, DB2, SQL Server, MySQL and Firebird:

---- code start ----

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class TestInsert
    {
      public static void main(String args[])
        throws Exception
      {
        Connection con = DriverManager.getConnection("...", "...", "...");
        con.setAutoCommit(false);

        PreparedStatement pstmt = con.prepareStatement("insert into x (id) values (?)");

        pstmt.setInt(1, 1);
        pstmt.executeUpdate();

        try
        {
          pstmt.setInt(1, 1);
          pstmt.executeUpdate();
        }
        catch (Exception ex)
        {
          System.out.println("***** Error: " + ex.getMessage());
        }

        System.out.println("trying second row");

        pstmt.setInt(1, 2);
        pstmt.executeUpdate();

        con.commit();

        ResultSet rs = con.createStatement().executeQuery("select count(*) from x");
        if (rs.next())
        {
          int rows = rs.getInt(1);
          System.out.println("rows: " + rows);
        }
        con.close();
      }
    }

---- code end ----

With Oracle, DB2, SQL Server, MySQL and Firebird you'll get something like this
(this is from Oracle; the error message will of course differ for the others):

  ***** Error: ORA-00001: Unique Constraint (THOMAS.SYS_C0021623) violated
  trying second row
  rows: 2


With Postgres you get:

  ***** Error: ERROR: duplicate key value violates unique constraint "x_pkey"
    Detail: Key (id)=(1) already exists.
  trying second row
  Exception in thread "main" org.postgresql.util.PSQLException: ERROR: current transaction is aborted, commands ignored until end of transaction block
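The usual way to make code like the example above behave the same way on Postgres is to wrap each statement that may fail in a savepoint, so a failure aborts only that statement rather than the whole transaction. A minimal sketch of the pattern, with a hypothetical helper class name (this is the standard JDBC Savepoint API, not anything PgJDBC-specific):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Savepoint;

public class SavepointInsert {

    // Attempts one insert inside an open transaction. If the statement
    // fails, only the work since the savepoint is rolled back, so the
    // surrounding transaction stays usable and can still be committed.
    // Returns true if the row went in, false if the statement failed.
    public static boolean tryInsert(Connection con, PreparedStatement pstmt, int id)
            throws SQLException {
        Savepoint sp = con.setSavepoint();
        try {
            pstmt.setInt(1, id);
            pstmt.executeUpdate();
            con.releaseSavepoint(sp);
            return true;
        } catch (SQLException ex) {
            con.rollback(sp); // undoes just this statement, not the transaction
            return false;
        }
    }
}
```

With this pattern the duplicate insert fails, the second row still goes in, and the commit sees two rows on Postgres as well. Savepoints have a small server-side cost, so releasing them promptly matters in long transactions.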


But again: in contrast to the OP, I do not consider this a bug; it's just that Postgres behaves differently.

But I understand that it can be a problem when migrating applications that have a sloppy way of dealing with transactions.

I took part in migrating several projects from Oracle and MySQL to Postgres over the last two years, and none of them had problems with this.

Thomas





--
Sent via pgsql-jdbc mailing list (pgsql-jdbc@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-jdbc
