Discussion: serialization failure why?


serialization failure why?

From:
Filipe Pina
Date:
I have these 2 tables:

CREATE TABLE "stuff" ("number" integer NOT NULL PRIMARY KEY, "title" varchar(40) NOT NULL);
CREATE TABLE "stuff_ext" ("stuff_ptr_id" integer NOT NULL PRIMARY KEY, "extra" integer NOT NULL);
ALTER TABLE "stuff_ext" ADD CONSTRAINT "stuff_ext_stuff_ptr_id_5a4ee8edae53404b" FOREIGN KEY ("stuff_ptr_id") REFERENCES "stuff" ("number") DEFERRABLE INITIALLY DEFERRED;
CREATE SEQUENCE stuff_seq;

And then the function:

CREATE OR REPLACE FUNCTION create_stuff(number integer, title text)
    RETURNS integer AS $$
DECLARE
    a1 stuff;
    a2 stuff_ext;
BEGIN
    IF number IS NULL THEN
        number := nextval('stuff_seq');
    END IF;

    a1.number := number;
    a1.title := title;

    a2.stuff_ptr_id := a1.number;

    INSERT INTO stuff VALUES (a1.*);
    INSERT INTO stuff_ext VALUES (a2.*);

    RETURN number;
END
$$
LANGUAGE plpgsql;


The DB is configured for SERIALIZABLE transaction mode.

Now, if I call the function without passing number, such as:

select create_stuff(NULL,'title');

in 10 forked processes in a loop with a few iterations in each, I get quite a few SERIALIZATION FAILURE errors (sqlstate 40001).

If I comment out the "INSERT INTO stuff_ext" line, I don't get any.

How is the second insert causing serialization dependencies...?
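For reference, each iteration of the test is effectively the following transaction (a sketch; the isolation level may instead come from `default_transaction_isolation`). Note that because the FK on stuff_ext is DEFERRABLE INITIALLY DEFERRED, the referential-integrity check runs at COMMIT time, not at the INSERT itself:

```sql
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT create_stuff(NULL, 'title');
COMMIT;  -- the 40001 serialization failures surface here
```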

The specific error messages vary between

ERROR: could not serialize access due to read/write dependencies among transactions
DETAIL: Reason code: Canceled on identification as a pivot, during commit attempt.
HINT: The transaction might succeed if retried.

and

ERROR: could not serialize access due to read/write dependencies among transactions
DETAIL: Reason code: Canceled on commit attempt with conflict in from prepared pivot.
HINT: The transaction might succeed if retried.

Thanks!

Re: serialization failure why?

From:
Bill Moran
Date:
On Tue, 16 Jun 2015 13:33:12 +0001
Filipe Pina <filipe.pina@impactzero.pt> wrote:

> I have these 2 tables:
>
> CREATE TABLE "stuff" ("number" integer NOT NULL PRIMARY KEY, "title"
> varchar(40) NOT NULL);
> CREATE TABLE "stuff_ext" ("stuff_ptr_id" integer NOT NULL PRIMARY KEY,
> "extra" integer NOT NULL);
> ALTER TABLE "stuff_ext" ADD CONSTRAINT
> "stuff_ext_stuff_ptr_id_5a4ee8edae53404b" FOREIGN KEY ("stuff_ptr_id")
> REFERENCES "stuff" ("number") DEFERRABLE INITIALLY DEFERRED;
> CREATE SEQUENCE stuff_seq;
>
> And then the function:
>
> CREATE OR REPLACE FUNCTION create_stuff(number integer, title text)
>     RETURNS integer AS $$
> DECLARE
>     a1 stuff;
>     a2 stuff_ext;
> BEGIN
>     IF number IS NULL THEN
>         number := nextval('stuff_seq');
>     END IF;
>
>     a1.number := number;
>     a1.title := title;
>
>     a2.stuff_ptr_id := a1.number;
>
>     INSERT INTO stuff VALUES (a1.*);
>     INSERT INTO stuff_ext VALUES (a2.*);
>
>     RETURN number;
> END
> $$
> LANGUAGE plpgsql;
>
>
> The DB is configured for SERIALIZABLE transaction mode.
>
> Now, if I call the function without passing number, such as:
>
> select create_stuff(NULL,'title');
>
> in 10 forked processes in a loop with a few iterations in each, I get
> quite a few SERIALIZATION FAILURE errors (sqlstate 40001).
>
> If I comment out the "INSERT INTO stuff_ext" line, I don't get any.
>
> How is the second insert causing serialization dependencies...?

I'm not sure this is correct, but I have a theory.

Essentially, PostgreSQL can't be sure that the foreign key will be
valid if the other transaction rolls back. I.e., what if the foreign
key is valid because the other transaction created the matching row,
and that other transaction then rolls back?

In other isolation modes, it can just wait for the appropriate lock to
free up, then see what happens. But in serializable mode it hits a
condition where it can't ensure serializability.
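The kind of interleaving this theory describes could be sketched like this (session labels and values are illustrative, not taken from the test):

```sql
-- Session A:
BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT INTO stuff VALUES (42, 'a');      -- parent row, not yet committed

-- Session B, concurrently:
BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT INTO stuff_ext VALUES (42, 0);    -- the RI check must read "stuff"
                                         -- to see whether number = 42 exists

-- Session A:
ROLLBACK;                                -- the row Session B's FK check
                                         -- depended on never becomes visible
```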

> The specific error messages vary between
>
> ERROR: could not serialize access due to read/write dependencies among
> transactions
> DETAIL: Reason code: Canceled on identification as a pivot, during
> commit attempt.
> HINT: The transaction might succeed if retried.
>
> and
>
> ERROR: could not serialize access due to read/write dependencies among
> transactions
> DETAIL: Reason code: Canceled on commit attempt with conflict in from
> prepared pivot.
> HINT: The transaction might succeed if retried.
>
> Thanks!


--
Bill Moran <wmoran@potentialtech.com>


Re: serialization failure why?

From:
Filipe Pina
Date:
Indeed Bill, if I drop the foreign key constraint on the stuff_ext table there are no failures at all...

But, since I can't remove the foreign key, how can I tell Postgres that it can "trust" it?
It's obvious (to the code reader, at least) that no other transaction is responsible for creating the referenced record: the very same transaction creates it (in the previous insert). If the first insert had failed (because the row already existed, or due to some other error), the second wouldn't even be executed, so at the very least the second shouldn't fail with a serialization error on the FK.

Is there something that can be annotated or passed in the insert to work around this? I really need to reduce the number of retries I'm performing, and this is one of the common cases I have (in different functions: FK serialization failures that shouldn't be a problem).
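One experiment along those lines (a sketch, not a confirmed workaround) would be to recreate the constraint as NOT DEFERRABLE, so the RI check runs at INSERT time, immediately after the same transaction has inserted the parent row, rather than at COMMIT:

```sql
ALTER TABLE stuff_ext
    DROP CONSTRAINT stuff_ext_stuff_ptr_id_5a4ee8edae53404b;
ALTER TABLE stuff_ext
    ADD CONSTRAINT stuff_ext_stuff_ptr_id_5a4ee8edae53404b
    FOREIGN KEY (stuff_ptr_id) REFERENCES stuff (number);  -- NOT DEFERRABLE
```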


Re: serialization failure why?

From:
Kevin Grittner
Date:
Filipe Pina <filipe.pina@impactzero.pt> wrote:

> if I drop the foreign key constraint on the stuff_ext table there are
> no failures at all…

It is my recollection that we were excluding the queries used to
enforce referential integrity constraints from the conflict
tracking, so I am surprised you are seeing this.  What is the exact
version you are using (as reported by the version() function)?

I am at a conference this week, away from my normal development
environment; but I will take a look next week.

Kevin Grittner


Re: serialization failure why?

From:
Filipe Pina
Date:
Hi Kevin,

I have installed:
PostgreSQL 9.3.7 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2, 64-bit

But since you mentioned it should have been fixed in later versions, I've upgraded to 9.4:
PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2, 64-bit

Though I still get the failures on 9.4.4..



Re: serialization failure why?

From:
Filipe Pina
Date:
Kevin, assuming you'll have some time next week to check whether this has been fixed in some version, I've compiled the test steps at http://pastebin.com/4Uqc2kPv

Thanks once again


Re: serialization failure why?

From:
Simon Riggs
Date:
On 17 June 2015 at 13:52, Kevin Grittner <kgrittn@ymail.com> wrote:
Filipe Pina <filipe.pina@impactzero.pt> wrote:

> if I drop the foreign key constraint on the stuff_ext table there are
> no failures at all…

It is my recollection that we were excluding the queries used to
enforce referential integrity constraints from the conflict
tracking, so I am surprised you are seeing this.  What is the exact
version you are using (as reported by the version() function)?

I don't see any mechanism for excluding anything from serializable checks, so I can't see how that would work.

I can't find any mention of serializability concerns in the RI code itself.

AFAIK it would be strange to exclude FK checks from serializability checks, since they represent a valid observation of an intermediate state. 

Mat Views are excluded but I don't understand why that should be the case. There is no documented explanation. 

--
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: serialization failure why?

From:
Kevin Grittner
Date:
Simon Riggs <simon@2ndQuadrant.com> wrote:
> On 17 June 2015 at 13:52, Kevin Grittner <kgrittn@ymail.com> wrote:
>> Filipe Pina <filipe.pina@impactzero.pt> wrote:

>>> if I drop the foreign key constraint on the stuff_ext table there are
>>> no failures at all…
>>
>> It is my recollection that we were excluding the queries used to
>> enforce referential integrity constraints from the conflict
>> tracking, so I am surprised you are seeing this.  What is the exact
>> version you are using (as reported by the version() function)?
>
> I don't see any mechanism for excluding anything from
> serializable checks, so I can't see how that would work.

It is a matter of where calls to PredicateLockXxx and
CheckForSerializableConflictXxx were inserted into, for
example, the heap and index AM code.  At least I think we omitted
placing some at locations which were known to be used for RI
enforcement; but apparently some more generic code is exercised by
the RI trigger execution which can still trigger serialization
failures based on FKs.

> I can't find any mention of serializability concerns in the RI
> code itself.

It is mentioned in the README-SSI file.

> AFAIK it would be strange to exclude FK checks from
> serializability checks, since they represent a valid observation
> of an intermediate state.

The idea that this is OK is based on the observations in the paper
"Automating the Detection of Snapshot Isolation Anomalies" by
Sudhir Jorwekar, Alan Fekete, Krithi Ramamritham, and S.
Sudarshan[1].  To quote a key sentence from that paper:

| The database system ensures the preservation of some integrity
| constraints which are explicitly declared to the system in the
| schema definition, such as uniqueness of primary key and
| referential integrity.  Some of the SI anomalies are avoided due
| to the dbms enforcement of these constraints.

For all the gory details, please reference that paper.  It was
discussed during PostgreSQL SSI development, although I'm not sure
whether that made it to the lists or was off-list discussion amongst
those working on it at the time.  Basically, the anomalies are
avoided due to the enforcement of the constraints, and it is next
to impossible to generate a serialization failure instead of the
constraint failure due to the timings involved.  Since the
constraints are more narrowly targeted (always at the tuple level),
using SSI techniques would be redundant effort (hurting
performance) that could only generate false positives.  If you see
some exception to that which we missed, let's discuss.
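A concrete instance of that observation (a sketch, not taken from the paper): when two serializable transactions insert the same primary key, the declared constraint resolves the conflict by itself, so SSI tracking of the reads done by the constraint check would only add false positives:

```sql
-- Session A:
BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT INTO stuff VALUES (1, 'first');

-- Session B, concurrently:
BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT INTO stuff VALUES (1, 'second');  -- blocks waiting for Session A

-- Session A:
COMMIT;
-- Session B now fails with unique_violation (23505), not with a
-- serialization failure (40001): the declared constraint already
-- prevented the anomaly.
```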

Perhaps that paper should be cited in the source code and/or
README.

> Mat Views are excluded but I don't understand why that should be
> the case.  There is no documented explanation.

Good point; it should be documented.  Basically, since the matview
is a materialized copy of data from other relations from some prior
point in time, the race conditions caught by SSI would be trivial
compared to those likely to exist based on the elapsed time since
the last REFRESH; so it would be kind of silly to try to enforce
the more subtle interactions while ignoring the big, glaring,
obvious one.  It would be a bit like treating a laceration of
someone's hand when they were not breathing -- it's not the thing
to worry about.  As we enhance matviews to have associated
freshness information and especially once we use them like indexes
to optimize queries this will deserve a close look, as there is
likely to be something meaningful we can do at that time.

Anyway, it appears that there is more that could be done to avoid
generating serialization failures based on the actions of RI
enforcement triggers.  I don't think we can properly call it a bug,
since it doesn't allow an incorrect state to be created; but it is
clearly an opportunity for performance enhancement of the feature.
One of many, unfortunately.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


[1] http://www.vldb.org/conf/2007/papers/industrial/p1263-jorwekar.pdf

Re: serialization failure why?

From:
Simon Riggs
Date:
On 29 June 2015 at 21:13, Kevin Grittner <kgrittn@ymail.com> wrote:
Simon Riggs <simon@2ndQuadrant.com> wrote:
> On 17 June 2015 at 13:52, Kevin Grittner <kgrittn@ymail.com> wrote:
>> Filipe Pina <filipe.pina@impactzero.pt> wrote:

>>> if I drop the foreign key constraint on the stuff_ext table there are
>>> no failures at all…
>>
>> It is my recollection that we were excluding the queries used to
>> enforce referential integrity constraints from the conflict
>> tracking, so I am surprised you are seeing this.  What is the exact
>> version you are using (as reported by the version() function)?
>
> I don't see any mechanism for excluding anything from
> serializable checks, so I can't see how that would work.

It is a matter of where calls to PredicateLockXxx and
CheckForSerializableConflictXxx were inserted into, for
example, the heap and index AM code.  At least I think we omitted
placing some at locations which were known to be used for RI
enforcement; but apparently some more generic code is exercised by
the RI trigger execution which can still trigger serialization
failures based on FKs.

> I can't find any mention of serializability concerns in the RI
> code itself.

It is mentioned in the README-SSI file.

> AFAIK it would be strange to exclude FK checks from
> serializability checks, since they represent a valid observation
> of an intermediate state.

The idea that this is OK is based on the observations in the paper
"Automating the Detection of Snapshot Isolation Anomalies" by
Sudhir Jorwekar, Alan Fekete, Krithi Ramamritham, and S.
Sudarshan[1].  To quote a key sentence from that paper:

So we are saying we can exclude FK checks from serialization, but we do not, yet.

Since the FK checks run with a special snapshot it should be simple to exclude them.
 
> Mat Views are excluded but I don't understand why that should be
> the case.  There is no documented explanation.

Good point; it should be documented.  Basically, since the matview
is a materialized copy of data from other relations from some prior
point in time, the race conditions caught by SSI would be trivial
compared to those likely to exist based on the elapsed time since
the last REFRESH; so it would be kind of silly to try to enforce
the more subtle interactions while ignoring the big, glaring,
obvious one.  It would be a bit like treating a laceration of
someone's hand when they were not breathing -- it's not the thing
to worry about.  As we enhance matviews to have associated
freshness information and especially once we use them like indexes
to optimize queries this will deserve a close look, as there is
likely to be something meaningful we can do at that time.

We should add that as a code comment.

Thanks for complete answers to those questions.

--
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services