Tom Lane wrote:
>
> Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> > In Mikko's both examples, Transaction 1 has to see a tuple
> > which didn't exist at the beginning of the transaction
> > while checking duplicate error.
>
> I'm not sure about that. What if the "tuple that didn't exist" is
> an updated version of a row that did exist earlier --- that is, the
> conflicting operation is an update not an insert? Does your answer
> change depending on whether the update changed the row's key value?
Displaying both a "can't serialize .." and a "cannot insert a
duplicate .." error seems better. There's another case:
# create table t (id int4 primary key);
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit
index 't_pkey' for table 't'
CREATE
# insert into t values (1);
INSERT 1481246 1
[session-1]
visco=# begin;
BEGIN
visco=# set transaction isolation level serializable;
SET VARIABLE
visco=# select * from t;
id
----
1
(1 row)
[session-2]
# delete from t;
DELETE 1
[session-1]
# insert into t values (2);
INSERT 1481247 1
IMHO this should cause a "can't serialize .." error.
>
> In the most straightforward implementation of your suggestion, I believe
> that a concurrent update (on a non-key column) would result in the
> system randomly delivering either "duplicate key" or "serialization
> error" depending on whether the index processing happened to look at
> the older or newer other tuple first.
It depends on what *happened to look* means.
Currently PostgreSQL ignores, from the start, updated or
deleted tuples which don't satisfy the qualification at
the point of the snapshot. Those are the tuples I mean
by *happened to look*.
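To make the visibility rule above concrete, here is a minimal sketch (not PostgreSQL source; the Snapshot/Tuple names and fields are illustrative assumptions) of how a tuple is ignored when its inserting or deleting transaction falls outside the snapshot:

```python
# Hypothetical sketch of MVCC snapshot visibility. Names and structure
# are assumptions for illustration, not the actual PostgreSQL code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Snapshot:
    xmax: int                 # first xid not yet assigned when snapshot was taken
    in_progress: frozenset    # xids of transactions running at snapshot time

@dataclass
class HeapTuple:
    xmin: int                      # inserting transaction
    xmax: Optional[int] = None     # deleting/updating transaction, if any

def satisfies_snapshot(t: HeapTuple, s: Snapshot) -> bool:
    # The inserter must have committed before the snapshot ...
    if t.xmin >= s.xmax or t.xmin in s.in_progress:
        return False
    # ... and any deleter must NOT have committed before the snapshot,
    # so a row deleted by a still-running transaction remains visible.
    if t.xmax is not None and t.xmax < s.xmax and t.xmax not in s.in_progress:
        return False
    return True

snap = Snapshot(xmax=100, in_progress=frozenset({90}))
print(satisfies_snapshot(HeapTuple(xmin=50), snap))           # True: old row
print(satisfies_snapshot(HeapTuple(xmin=50, xmax=95), snap))  # False: deleted
print(satisfies_snapshot(HeapTuple(xmin=50, xmax=90), snap))  # True: deleter running
print(satisfies_snapshot(HeapTuple(xmin=120), snap))          # False: too new
```

A tuple rejected by a check like this is exactly one the scan never "happened to look" at, which is why the duplicate-key versus serialization-error outcome hinges on this logic.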
regards,
Hiroshi Inoue