Discussion: SERIALIZABLE and INSERTs with multiple VALUES


SERIALIZABLE and INSERTs with multiple VALUES

From
Jason Dusek
Date:

Hi All,

I notice the following oddity:

=# CREATE TABLE with_pk (i integer PRIMARY KEY);
CREATE TABLE
=# BEGIN;
BEGIN
=# INSERT INTO with_pk VALUES (1) ON CONFLICT DO NOTHING;
INSERT 0 1
=# INSERT INTO with_pk VALUES (1) ON CONFLICT DO NOTHING;
INSERT 0 0
=# END;
COMMIT
=# BEGIN;
BEGIN
=# INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
ERROR:  could not serialize access due to concurrent update
=# END;
ROLLBACK

How are these two transactions different?

Kind Regards,

  Jason Dusek

Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Kevin Grittner
Date:
On Tue, Oct 11, 2016 at 2:29 PM, Jason Dusek <jason.dusek@gmail.com> wrote:

> I notice the following oddity:

>  =# CREATE TABLE with_pk (i integer PRIMARY KEY);
> CREATE TABLE

>  =# BEGIN;
> BEGIN
>  =# INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
> ERROR:  could not serialize access due to concurrent update
>  =# END;
> ROLLBACK

I don't see that on development HEAD.  What version are you
running?  What is your setting for default_transaction_isolation?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Jason Dusek
Date:
SELECT version(),
       (SELECT setting FROM pg_settings WHERE name = 'default_transaction_deferrable') AS default_transaction_deferrable,
       (SELECT setting FROM pg_settings WHERE name = 'default_transaction_isolation') AS default_transaction_isolation;
─[ RECORD 1 ]──────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────
version                        │ PostgreSQL 9.5.4 on x86_64-apple-darwin15.6.0, compiled by Apple LLVM version 8.0.0 (clang-800.0.38), 64-bit
default_transaction_deferrable │ on
default_transaction_isolation  │ serializable

On Tue, 11 Oct 2016 at 13:00 Kevin Grittner <kgrittn@gmail.com> wrote:
On Tue, Oct 11, 2016 at 2:29 PM, Jason Dusek <jason.dusek@gmail.com> wrote:

> I notice the following oddity:

>  =# CREATE TABLE with_pk (i integer PRIMARY KEY);
> CREATE TABLE

>  =# BEGIN;
> BEGIN
>  =# INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
> ERROR:  could not serialize access due to concurrent update
>  =# END;
> ROLLBACK

I don't see that on development HEAD.  What version are you
running?  What is your setting for default_transaction_isolation?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Albe Laurenz
Date:
Kevin Grittner wrote:
> Sent: Tuesday, October 11, 2016 10:00 PM
> To: Jason Dusek
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] SERIALIZABLE and INSERTs with multiple VALUES
> 
> On Tue, Oct 11, 2016 at 2:29 PM, Jason Dusek <jason.dusek@gmail.com> wrote:
> 
>> I notice the following oddity:
> 
>>  =# CREATE TABLE with_pk (i integer PRIMARY KEY);
>> CREATE TABLE
> 
>>  =# BEGIN;
>> BEGIN
>>  =# INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
>> ERROR:  could not serialize access due to concurrent update
>>  =# END;
>> ROLLBACK
> 
> I don't see that on development HEAD.  What version are you
> running?  What is your setting for default_transaction_isolation?

The subject says SERIALIZABLE, and I can see it on my 9.5.4 database:

test=> CREATE TABLE with_pk (i integer PRIMARY KEY);
CREATE TABLE
test=> START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION
test=> INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
ERROR:  could not serialize access due to concurrent update

Yours,
Laurenz Albe

Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Thomas Munro
Date:
On Wed, Oct 12, 2016 at 8:50 PM, Albe Laurenz <laurenz.albe@wien.gv.at> wrote:
> Kevin Grittner wrote:
>> On Tue, Oct 11, 2016 at 2:29 PM, Jason Dusek <jason.dusek@gmail.com> wrote:
>>> I notice the following oddity:
>>
>>>  =# CREATE TABLE with_pk (i integer PRIMARY KEY);
>>> CREATE TABLE
>>
>>>  =# BEGIN;
>>> BEGIN
>>>  =# INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
>>> ERROR:  could not serialize access due to concurrent update
>>>  =# END;
>>> ROLLBACK
>>
>> I don't see that on development HEAD.  What version are you
>> running?  What is your setting for default_transaction_isolation?
>
> The subject says SERIALIZABLE, and I can see it on my 9.5.4 database:
>
> test=> CREATE TABLE with_pk (i integer PRIMARY KEY);
> CREATE TABLE
> test=> START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
> START TRANSACTION
> test=> INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
> ERROR:  could not serialize access due to concurrent update

This happens in both SERIALIZABLE and REPEATABLE READ when a single
command inserts conflicting rows with an ON CONFLICT clause, and it
comes from the check in ExecCheckHeapTupleVisible whose comment says:

/*
 * ExecCheckHeapTupleVisible -- verify heap tuple is visible
 *
 * It would not be consistent with guarantees of the higher isolation levels to
 * proceed with avoiding insertion (taking speculative insertion's alternative
 * path) on the basis of another tuple that is not visible to MVCC snapshot.
 * Check for the need to raise a serialization failure, and do so as necessary.
 */

So it seems to be working as designed.  Perhaps someone could argue
that you should make an exception for tuples inserted by the current
command.

--
Thomas Munro
http://www.enterprisedb.com


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Vitaly Burovoy
Date:
On 10/12/16, Thomas Munro <thomas.munro@enterprisedb.com> wrote:
> On Wed, Oct 12, 2016 at 8:50 PM, Albe Laurenz <laurenz.albe@wien.gv.at>
> wrote:
>> Kevin Grittner wrote:
>>> On Tue, Oct 11, 2016 at 2:29 PM, Jason Dusek <jason.dusek@gmail.com>
>>> wrote:
>>>> I notice the following oddity:
>>>
>>>>  =# CREATE TABLE with_pk (i integer PRIMARY KEY);
>>>> CREATE TABLE
>>>
>>>>  =# BEGIN;
>>>> BEGIN
>>>>  =# INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
>>>> ERROR:  could not serialize access due to concurrent update
>>>>  =# END;
>>>> ROLLBACK
>>>
>>> I don't see that on development HEAD.  What version are you
>>> running?  What is your setting for default_transaction_isolation?
>>
>> The subject says SERIALIZABLE, and I can see it on my 9.5.4 database:
>>
>> test=> CREATE TABLE with_pk (i integer PRIMARY KEY);
>> CREATE TABLE
>> test=> START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
>> START TRANSACTION
>> test=> INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
>> ERROR:  could not serialize access due to concurrent update
>
> This happens in both SERIALIZABLE and REPEATABLE READ when a single
> command inserts conflicting rows with an ON CONFLICT clause, and it
> comes from the check in ExecCheckHeapTupleVisible whose comment says:
>
> /*
>  * ExecCheckHeapTupleVisible -- verify heap tuple is visible
>  *
>  * It would not be consistent with guarantees of the higher isolation levels to
>  * proceed with avoiding insertion (taking speculative insertion's alternative
>  * path) on the basis of another tuple that is not visible to MVCC snapshot.
>  * Check for the need to raise a serialization failure, and do so as necessary.
>  */
>
> So it seems to be working as designed.  Perhaps someone could argue
> that you should make an exception for tuples inserted by the current
> command.

I disagree. It is designed to prevent updating a tuple which was
updated in a parallel transaction that has just committed. This case
is a little bit different: this tuple has been inserted in the same
transaction by the same command. I think it is an obvious bug, because
there is no "concurrent update". There is no update at all.

ON CONFLICT handling just does not cover all the cases that can arise.
Normally (without an "ON CONFLICT" clause) INSERT raises "duplicate key
value violates unique constraint" and never reaches the
ExecCheckHeapTupleVisible check: ExecInsert handles constraint checks,
but not later checks like ExecCheckHeapTupleVisible.

--
Best regards,
Vitaly Burovoy


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Kevin Grittner
Date:
On Wed, Oct 12, 2016 at 2:50 AM, Albe Laurenz <laurenz.albe@wien.gv.at> wrote:
> Kevin Grittner wrote:

>> I don't see that on development HEAD.  What version are you
>> running?  What is your setting for default_transaction_isolation?
>
> The subject says SERIALIZABLE, and I can see it on my 9.5.4 database:

Oh, I see -- it doesn't happen if the row already exists at the
start of the transaction, which it does if you run the OP's entire
sample.  If you skip the transaction in the middle of his sample,
the error occurs.
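
For instance (a sketch, assuming value 2 was committed by an earlier
transaction), the multi-row insert then completes quietly, because both
conflicts are against a tuple that is visible to the snapshot:

=# INSERT INTO with_pk VALUES (2);
INSERT 0 1
=# BEGIN;
BEGIN
=# INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
INSERT 0 0
=# END;
COMMIT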

> test=> CREATE TABLE with_pk (i integer PRIMARY KEY);
> CREATE TABLE
> test=> START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
> START TRANSACTION
> test=> INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
> ERROR:  could not serialize access due to concurrent update

Or that, as a nice, self-contained test case.  :-)

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Kevin Grittner
Date:
[adding Peter Geoghegan to the email addresses]

On Wed, Oct 12, 2016 at 7:11 AM, Vitaly Burovoy <vitaly.burovoy@gmail.com> wrote:
> On 10/12/16, Thomas Munro <thomas.munro@enterprisedb.com> wrote:

>> This happens in both SERIALIZABLE and REPEATABLE READ when a single
>> command inserts conflicting rows with an ON CONFLICT clause, and it
>> comes from the check in ExecCheckHeapTupleVisible whose comment says:
>>
>> /*
>>  * ExecCheckHeapTupleVisible -- verify heap tuple is visible
>>  *
>>  * It would not be consistent with guarantees of the higher isolation levels to
>>  * proceed with avoiding insertion (taking speculative insertion's alternative
>>  * path) on the basis of another tuple that is not visible to MVCC snapshot.
>>  * Check for the need to raise a serialization failure, and do so as necessary.
>>  */
>>
>> So it seems to be working as designed.  Perhaps someone could argue
>> that you should make an exception for tuples inserted by the current
>> command.
>
> I disagree. It is designed to prevent updating a tuple which was
> updated in a parallel transaction that has just committed. This case
> is a little bit different: this tuple has been inserted in the same
> transaction by the same command. I think it is an obvious bug, because
> there is no "concurrent update". There is no update at all.

Yeah, the concept of a serialization failure is that the
transaction would have succeeded except for the actions of a
concurrent transaction.  It is supposed to indicate that a retry
has a chance to succeed, because the conflicting transaction is
likely to have completed.  To generate a "serialization failure"
(or, IMO, any error with a SQLSTATE in class "40") from the actions
of a single transaction is completely wrong, and should be
considered a bug.
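
To illustrate (a sketch against a 9.5 server, with the with_pk table
initially empty): a client that retries on serialization failure would
loop forever here, because every retry collides with its own first row:

=# BEGIN;
BEGIN
=# INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
ERROR:  could not serialize access due to concurrent update
=# ROLLBACK;
ROLLBACK
-- Retrying the identical statement cannot succeed: there is no
-- concurrent transaction whose completion would clear the conflict.
=# BEGIN;
BEGIN
=# INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
ERROR:  could not serialize access due to concurrent update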

> ON CONFLICT handling just does not cover all the cases that can arise.
> Normally (without an "ON CONFLICT" clause) INSERT raises "duplicate key
> value violates unique constraint" and never reaches the
> ExecCheckHeapTupleVisible check: ExecInsert handles constraint checks,
> but not later checks like ExecCheckHeapTupleVisible.

The test in ExecCheckHeapTupleVisible() seems wrong to me.  It's
not immediately obvious what the proper fix is.  Peter, do you have
any ideas on this?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Kevin Grittner
Date:
On Wed, Oct 12, 2016 at 10:06 AM, Kevin Grittner <kgrittn@gmail.com> wrote:

> The test in ExecCheckHeapTupleVisible() seems wrong to me.  It's
> not immediately obvious what the proper fix is.

To identify what cases ExecCheckHeapTupleVisible() was meant to
cover I commented out the body of the function to see which
regression tests failed.  None did.  The failures shown on this
thread are fixed by doing so.  If there is really a need for this
function, speak up now and provide a test case showing what is
broken without it; otherwise if I can't find some justification for
this function I will rip it (and the calls to it) out of the code.
If you do have some test case showing what breaks without the
function, let's get it added to the regression tests!

I'm currently running `make check-world` with TAP tests enabled,
just in case there is some test there which demonstrates the need
for this.  It seems unlikely that such a test would be under the
TAP tests, but I'm making sure...

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Peter Geoghegan
Date:
On Wed, Oct 12, 2016 at 12:45 PM, Kevin Grittner <kgrittn@gmail.com> wrote:
>> The test in ExecCheckHeapTupleVisible() seems wrong to me.  It's
>> not immediately obvious what the proper fix is.

(prune old e-mail address from cc list)

I agree that the multi-value case is a bug.

> To identify what cases ExecCheckHeapTupleVisible() was meant to
> cover I commented out the body of the function to see which
> regression tests failed.  None did.  The failures shown on this
> thread are fixed by doing so.  If there is really a need for this
> function, speak up now and provide a test case showing what is
> broken without it; otherwise if I can't find some justification for
> this function I will rip it (and the calls to it) out of the code.
> If you do have some test case showing what breaks without the
> function, let's get it added to the regression tests!

I am independently working on a bug fix to
ExecCheckHeapTupleVisible(), concerning the fact that
HeapTupleSatisfiesMVCC() can be called without even a shared buffer
lock held (only a buffer pin) [1]. So, anything that we do here should
probably take care of that, while we're at it.

I think that it should be pretty obvious to you why the check exists
at all, Kevin. It exists because it would be improper to decide to
take the DO NOTHING path on the basis of some other committed tuple
existing that is not visible to the original MVCC snapshot, at higher
isolation levels. There is a similar consideration for DO UPDATE. I'm
slightly surprised that you're contemplating just ripping the check
out. Did I miss something?

[1] https://www.postgresql.org/message-id/57EE93C8.8080504@postgrespro.ru
--
Peter Geoghegan


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Thomas Munro
Date:
On Thu, Oct 13, 2016 at 8:45 AM, Kevin Grittner <kgrittn@gmail.com> wrote:
> On Wed, Oct 12, 2016 at 10:06 AM, Kevin Grittner <kgrittn@gmail.com> wrote:
>
>> The test in ExecCheckHeapTupleVisible() seems wrong to me.  It's
>> not immediately obvious what the proper fix is.
>
> To identify what cases ExecCheckHeapTupleVisible() was meant to
> cover I commented out the body of the function to see which
> regression tests failed.  None did.  The failures shown on this
> thread are fixed by doing so.  If there is really a need for this
> function, speak up now and provide a test case showing what is
> broken without it; otherwise if I can't find some justification for
> this function I will rip it (and the calls to it) out of the code.
> If you do have some test case showing what breaks without the
> function, let's get it added to the regression tests!

Here's an isolation test that shows the distinction between a
transaction that reports a serialization failure because it crashed
into its own invisible tuples, and one that reports a serialization
failure because it crashed into a concurrent transaction's invisible
tuples.  Surely Peter intended the latter to report an error, but the
former seems like an oversight.
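
In outline (a sketch of the two schedules, not the attached spec
itself), the first can never succeed on retry, while the second is the
behavior Peter intended:

-- Case 1: a transaction crashes into its own invisible tuples.
s1: BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
s1: INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
    -- ERROR:  could not serialize access due to concurrent update

-- Case 2: a transaction crashes into a concurrent transaction's
-- invisible tuples.
s1: BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
s2: BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
s1: INSERT INTO with_pk VALUES (3);
s2: INSERT INTO with_pk VALUES (3) ON CONFLICT DO NOTHING;  -- blocks
s1: COMMIT;
    -- s2: ERROR:  could not serialize access due to concurrent update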

Here's a patch that shows one way to fix it.  I think it does make
sense to change this, because otherwise automatic
retry-on-serialization-failure strategies will be befuddled by this
doomed transaction.  And as you and Vitaly have said, there is
literally no concurrent update.

--
Thomas Munro
http://www.enterprisedb.com

Attachments

Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Peter Geoghegan
Date:
On Wed, Oct 12, 2016 at 1:05 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:
> Here's a patch that shows one way to fix it.  I think it does make
> sense to change this, because otherwise automatic
> retry-on-serialization-failure strategies will be befuddled by this
> doomed transaction.  And as you and Vitaly have said, there is
> literally no concurrent update.

I think that you have the right idea, but we still need to fix that
buffer lock bug I mentioned...


--
Peter Geoghegan


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Kevin Grittner
Date:
On Wed, Oct 12, 2016 at 3:07 PM, Peter Geoghegan <pg@bowt.ie> wrote:
> On Wed, Oct 12, 2016 at 1:05 PM, Thomas Munro
> <thomas.munro@enterprisedb.com> wrote:
>> Here's a patch that shows one way to fix it.  I think it does make
>> sense to change this, because otherwise automatic
>> retry-on-serialization-failure strategies will be befuddled by this
>> doomed transaction.  And as you and Vitaly have said, there is
>> literally no concurrent update.
>
> I think that you have the right idea, but we still need to fix that
> buffer lock bug I mentioned...

Aren't these two completely separate and independent bugs?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Peter Geoghegan
Date:
On Wed, Oct 12, 2016 at 1:41 PM, Kevin Grittner <kgrittn@gmail.com> wrote:
> Aren't these two completely separate and independent bugs?

Technically they are, but they are both isolated to the same small
function. Surely it's better to fix them both at once?

--
Peter Geoghegan


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Kevin Grittner
Date:
On Wed, Oct 12, 2016 at 3:02 PM, Peter Geoghegan <pg@bowt.ie> wrote:

> I agree that the multi-value case is a bug.

> I think that it should be pretty obvious to you why the check exists
> at all, Kevin. It exists because it would be improper to decide to
> take the DO NOTHING path on the basis of some other committed tuple
> existing that is not visible to the original MVCC snapshot, at higher
> isolation levels.

That's only true if it causes a cycle in the apparent order of
execution.  If we rip out the check what we have is behavior
completely consistent with the other transaction executing first;
in other words, it creates a read-write dependency (a/k/a
rw-conflict) from the current transaction to the concurrent
transaction which succeeds in its insert.  That may or may not
cause a cycle, depending on what else is happening.

Now, generating a write conflict serialization failure will prevent
serialization anomalies, but unless I'm missing something it is a
cruder test than is required, and will generate false positives in
simple cases like Thomas created.  What I think would be more
appropriate is to record the dependency and test for a
serialization failure.

> There is a similar consideration for DO UPDATE.

But in DO UPDATE the current transaction is writing something that
the other transaction attempted to read, so that *does* create
write skew and its attendant anomalies.  There I think we *do* need
to throw a serialization failure error.

> I'm slightly surprised that you're contemplating just ripping the check out.

The lack of any regression tests to demonstrate the failure the
code is preventing puts the burden on others to figure it out fresh
each time, which had me a little miffed.  Note that you are
mis-quoting me a bit -- I said "if I can't find some justification
for this function I will rip it (and the calls to it) out of the
code."  I was still looking.  I don't think the write conflict
justifies it, but the rw-conflict and attendant risk of a cycle in
apparent order of execution does.

If the "proper" fix is impossible (or just too freaking ugly) we
might fall back on the fix Thomas suggested, but I would like to
take advantage of the "special properties" of the INSERT/ON
CONFLICT DO NOTHING code to avoid false positives where we can.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Kevin Grittner
Date:
On Wed, Oct 12, 2016 at 3:55 PM, Peter Geoghegan <pg@bowt.ie> wrote:
> On Wed, Oct 12, 2016 at 1:41 PM, Kevin Grittner <kgrittn@gmail.com> wrote:
>> Aren't these two completely separate and independent bugs?
>
> Technically they are, but they are both isolated to the same small
> function. Surely it's better to fix them both at once?

If the code is hopelessly intertwined, maybe.  Generally it is
better to fix two independent bugs with two independent patches.
It documents things better and makes each fix easier to understand.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Peter Geoghegan
Date:
On Wed, Oct 12, 2016 at 2:06 PM, Kevin Grittner <kgrittn@gmail.com> wrote:
> If the "proper" fix is impossible (or just too freaking ugly) we
> might fall back on the fix Thomas suggested, but I would like to
> take advantage of the "special properties" of the INSERT/ON
> CONFLICT DO NOTHING code to avoid false positives where we can.

Do you intend to propose a patch to do that?


--
Peter Geoghegan


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Thomas Munro
Date:
On Thu, Oct 13, 2016 at 10:06 AM, Kevin Grittner <kgrittn@gmail.com> wrote:
> On Wed, Oct 12, 2016 at 3:02 PM, Peter Geoghegan <pg@bowt.ie> wrote:
>
>> I agree that the multi-value case is a bug.
>
>> I think that it should be pretty obvious to you why the check exists
>> at all, Kevin. It exists because it would be improper to decide to
>> take the DO NOTHING path on the basis of some other committed tuple
>> existing that is not visible to the original MVCC snapshot, at higher
>> isolation levels.
>
> That's only true if it causes a cycle in the apparent order of
> execution.  If we rip out the check what we have is behavior
> completely consistent with the other transaction executing first;
> in other words, it creates a read-write dependency (a/k/a
> rw-conflict) from the current transaction to the concurrent
> transaction which succeeds in its insert.  That may or may not
> cause a cycle, depending on what else is happening.

The "higher isolation levels" probably shouldn't be treated the same way.

I think Peter's right about REPEATABLE READ.  We should definitely
raise the error immediately as we do in that level, because our RR
(SI) doesn't care about write skew and all that stuff, it just
promises that you can only see data in your snapshot.  We can't allow
you to take a different course of action based on data that your
snapshot can't see, so the only reasonable thing to do is abandon
ship.

But yeah, the existing code raises false positive serialization
failures under SERIALIZABLE, and that's visible in the isolation test
I posted: there is actually a serial order of those transactions with
the same result.

When working on commit fcff8a57 I became suspicious of the way ON
CONFLICT interacts with SSI, as I mentioned in passing back then[1],
thinking mainly of false negatives.  I failed to find a
non-serializable schedule involving ON CONFLICT that was allowed to
run, though I didn't spend much time on it.  One thing that worries
me is the final permutation of read-write-unique-4.spec, which
produces an arguably spurious unique constraint violation (UCV), that
is, a transaction that doesn't commit but raises a UCV instead of the
serialization failure you might
expect.  The ON CONFLICT equivalent might be a transaction that takes
the ON CONFLICT path and then commits, even though it should be
considered non-serializable.  I would really like to understand that
case better, and until then I wouldn't bet my boots that it isn't
possible to commit anomalies using ON CONFLICT under SERIALIZABLE
without Peter's check (or even with it), despite the fact that it
reaches predicate locking code via heap_fetch etc.

[1] https://www.postgresql.org/message-id/CAEepm%3D2kYCegxp9qMR5TM1X3oXHj16iYzLPj_go52R2R07EvnA%40mail.gmail.com

--
Thomas Munro
http://www.enterprisedb.com


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Peter Geoghegan
Date:
On Wed, Oct 12, 2016 at 6:06 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:
> But yeah, the existing code raises false positive serialization
> failures under SERIALIZABLE, and that's visible in the isolation test
> I posted: there is actually a serial order of those transactions with
> the same result.

I was under the impression that false positives of this kind are
allowed by SSI. Why focus on this false positive scenario in
particular?

--
Peter Geoghegan


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Thomas Munro
Date:
On Thu, Oct 13, 2016 at 2:32 PM, Peter Geoghegan <pg@bowt.ie> wrote:
> On Wed, Oct 12, 2016 at 6:06 PM, Thomas Munro
> <thomas.munro@enterprisedb.com> wrote:
>> But yeah, the existing code raises false positive serialization
>> failures under SERIALIZABLE, and that's visible in the isolation test
>> I posted: there is actually a serial order of those transactions with
>> the same result.
>
> I was under the impression that false positives of this kind are
> allowed by SSI. Why focus on this false positive scenario in
> particular?

Sure, they're allowed.  Of course the ones caused by your own command
are questionable because there is no concurrent transaction and
retrying the transaction will never succeed, as discussed, but it
seems we all agree on that.  The question is just whether INSERT ...
ON CONFLICT should generate more false positives than plain old
INSERT.  Two overlapping conflicting plain old INSERTs without any
other entanglement to create a cycle will result in one succeeding and
the other getting a UCV, as if one ran after the other with no
overlap.  It would be nice if the ON CONFLICT case used the same
smarts to take the ON CONFLICT path, unless there is some theoretical
problem I'm overlooking.  Otherwise concurrency is reduced.

I wonder if we should fix the same-command problem reported by the OP,
and then study larger questions of ON CONFLICT/SERIALIZABLE
interaction as a separate project.  I may be imagining problems where
there are none...

--
Thomas Munro
http://www.enterprisedb.com


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Kevin Grittner
Date:
On Wed, Oct 12, 2016 at 5:21 PM, Peter Geoghegan <pg@bowt.ie> wrote:
> On Wed, Oct 12, 2016 at 2:06 PM, Kevin Grittner <kgrittn@gmail.com> wrote:
>> If the "proper" fix is impossible (or just too freaking ugly) we
>> might fall back on the fix Thomas suggested, but I would like to
>> take advantage of the "special properties" of the INSERT/ON
>> CONFLICT DO NOTHING code to avoid false positives where we can.
>
> Do you intend to propose a patch to do that?

Yes, I'm working on that.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Kevin Grittner
Date:
On Wed, Oct 12, 2016 at 8:06 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:
> On Thu, Oct 13, 2016 at 10:06 AM, Kevin Grittner <kgrittn@gmail.com> wrote:
>> On Wed, Oct 12, 2016 at 3:02 PM, Peter Geoghegan <pg@bowt.ie> wrote:
>>
>>> I agree that the multi-value case is a bug.
>>
>>> I think that it should be pretty obvious to you why the check exists
>>> at all, Kevin. It exists because it would be improper to decide to
>>> take the DO NOTHING path on the basis of some other committed tuple
>>> existing that is not visible to the original MVCC snapshot, at higher
>>> isolation levels.
>>
>> That's only true if it causes a cycle in the apparent order of
>> execution.  If we rip out the check what we have is behavior
>> completely consistent with the other transaction executing first;
>> in other words, it creates a read-write dependency (a/k/a
>> rw-conflict) from the current transaction to the concurrent
>> transaction which succeeds in its insert.  That may or may not
>> cause a cycle, depending on what else is happening.
>
> The "higher isolation levels" probably shouldn't be treated the same way.
>
> I think Peter's right about REPEATABLE READ.  We should definitely
> raise the error immediately as we do in that level, because our RR
> (SI) doesn't care about write skew and all that stuff, it just
> promises that you can only see data in your snapshot.

But the whole point of the special code for both RI and INSERT/ON
CONFLICT is to get "underneath" that and provide a "primitive" that
can see things an application statement can't, for better
performance and error handling.  What SERIALIZABLE promises is that
it runs exactly the same as REPEATABLE READ, but with some
additional monitoring for serialization failure errors in some
places that REPEATABLE READ does not generate them -- this would be
the first and only place that SERIALIZABLE would break that model.
The idea seems completely wrong and arbitrary.

Where do you see a problem if REPEATABLE READ handles INSERT/ON
CONFLICT without error?  In many cases it would actually be
providing a result consistent with a serial execution of the
transactions; and where it doesn't, it would be the same anomalies
that are possible with anything else under REPEATABLE READ.

> We can't allow you to take a different course of action based on
> data that your snapshot can't see,

But that is exactly what INSERT/ON CONFLICT always does!  That is
the only way to avoid the so-called "unprincipled deadlocks" the
feature aims to avoid.

> so the only reasonable thing to do is abandon ship.

I disagree.

> But yeah, the existing code raises false positive serialization
> failures under SERIALIZABLE, and that's visible in the isolation test
> I posted: there is actually a serial order of those transactions with
> the same result.

Exactly.  The error based on the write conflict with ON CONFLICT DO
NOTHING in your patch is really a false positive.  That doesn't
break correctness, but it hurts performance, so it should be
avoided if possible.

> When working on commit fcff8a57 I became suspicious of the way ON
> CONFLICT interacts with SSI, as I mentioned in passing back then[1],
> thinking mainly of false negatives.  I failed to find a
> non-serializable schedule involving ON CONFLICT that was allowed to
> run, though I didn't spend much time on it.   One thing that worries
> me is the final permutation of read-write-unique-4.spec, which
> produces an arguably spurious UCV, that is, a transaction that doesn't
> commit but raises a UCV instead of the serialization failure you might
> expect.  The ON CONFLICT equivalent might be a transaction that takes
> the ON CONFLICT path and then commits, even though it should be
> considered non-serializable.  I would really like to understand that
> case better, and until then I wouldn't bet my boots that it isn't
> possible to commit anomalies using ON CONFLICT under SERIALIZABLE
> without Peter's check (or even with it), despite the fact that it
> reaches predicate locking code via heap_fetch etc.

Hm.  With the duplicate key error I fail to see how any anomaly
could make it to a committed state in the database, although I
agree it is unfortunate that there is that one case where it really
should be considered a serialization failure that we haven't yet
coerced to yield that instead of the duplicate key error.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Kevin Grittner
Date:
On Wed, Oct 12, 2016 at 8:32 PM, Peter Geoghegan <pg@bowt.ie> wrote:
> On Wed, Oct 12, 2016 at 6:06 PM, Thomas Munro <thomas.munro@enterprisedb.com> wrote:
>> But yeah, the existing code raises false positive serialization
>> failures under SERIALIZABLE, and that's visible in the isolation test
>> I posted: there is actually a serial order of those transactions with
>> the same result.
>
> I was under the impression that false positives of this kind are
> allowed by SSI. Why focus on this false positive scenario in
> particular?

Every situation that generates a false positive hurts performance;
we went to great lengths to minimize those cases.  In addition, we
made sure that at the point that a serialization failure is
returned, that retrying the transaction from the start could not
fail on the same combination of transactions, by ensuring that at
least one transaction in the set had successfully committed, and
that it was a transaction which had done writes.  To generate a
serialization failure on a single transaction has to be considered
a bug, because a retry *CAN NOT SUCCEED*!  This is likely to break
many frameworks designed to work with serializable transactions.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Peter Geoghegan
Date:
On Thu, Oct 13, 2016 at 6:19 AM, Kevin Grittner <kgrittn@gmail.com> wrote:
>> I was under the impression that false positives of this kind are
>> allowed by SSI. Why focus on this false positive scenario in
>> particular?
>
> Every situation that generates a false positive hurts performance;
> we went to great lengths to minimize those cases.  In addition, we
> made sure that at the point that a serialization failure is
> returned, that retrying the transaction from the start could not
> fail on the same combination of transactions, by ensuring that at
> least one transaction in the set had successfully committed, and
> that it was a transaction which had done writes.  To generate a
> serialization failure on a single transaction has to be considered
> a bug, because a retry *CAN NOT SUCCEED*!  This is likely to break
> many frameworks designed to work with serializable transactions.

It sounds like you're talking about the original complaint about a
multi-value INSERT. It took me a minute to decide that that's probably
what you meant, because everyone already agrees that that isn't okay
-- you don't need to convince me.

We must still determine if a fix along the lines of the one proposed
by Thomas is basically acceptable (that is, that it does not clearly
break any documented guarantees, even if it is overly strict).
Separately, I'd be interested in seeing how specifically we could do
better with the patch that you have in the works for this.

In general, I see value in reducing false positives, but I don't
understand why your concern here isn't just about preferring to keep
them to a minimum (doing our best). In other words, I don't understand
why these false positives are special, and I'm still not even clear on
whether you are actually arguing that they are special. (Except, of
course, the multi-value case -- that's clearly not okay.)

So, with the fix proposed by Thomas applied, will there be any
remaining false positives that are qualitatively different to existing
false positive cases? And, if so, how?

--
Peter Geoghegan


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Kevin Grittner
Date:
On Thu, Oct 13, 2016 at 2:16 PM, Peter Geoghegan <pg@bowt.ie> wrote:
> On Thu, Oct 13, 2016 at 6:19 AM, Kevin Grittner <kgrittn@gmail.com> wrote:

>> Every situation that generates a false positive hurts performance;
>> we went to great lengths to minimize those cases.

>> To generate a
>> serialization failure on a single transaction has to be considered
>> a bug, because a retry *CAN NOT SUCCEED*!  This is likely to break
>> many frameworks designed to work with serializable transactions.
>
> It sounds like you're talking about the original complaint about a
> multi-value INSERT. It took me a minute to decide that that's probably
> what you meant, because everyone already agrees that that isn't okay
> -- you don't need to convince me.

That second part, yeah -- that's about generating a serialization
failure with one transaction.  It's pretty bad if you can't get a
set that contains one transaction to behave as though the
transactions in that set were run one at a time.  ;-)

> We must still determine if a fix along the lines of the one proposed
> by Thomas is basically acceptable (that is, that it does not clearly
> break any documented guarantees, even if it is overly strict).
> Separately, I'd be interested in seeing how specifically we could do
> better with the patch that you have in the works for this.

Basically, rather than just failing, I think we should call
CheckForSerializableConflictOut() (which determines whether the
tuple we are reading causes a rw-conflict between our current
transaction and the transaction which last wrote that tuple) and
PredicateLockTuple() (which tells later updates or deletes that
we've read the tuple).

> In general, I see value in reducing false positives, but I don't
> understand why your concern here isn't just about preferring to keep
> them to a minimum (doing our best).

That's exactly what I want to do, rather than what is the easiest
and first thing to come to mind.

> In other words, I don't understand
> why these false positives are special, and I'm still not even clear on
> whether you are actually arguing that they are special. (Except, of
> course, the multi-value case -- that's clearly not okay.)
>
> So, with the fix proposed by Thomas applied, will there be any
> remaining false positives that are qualitatively different to existing
> false positive cases? And, if so, how?

The INSERT ... ON CONFLICT DO NOTHING case does not write the
tuple, so this would be the first place we would be generating a
"write conflict" when we're not writing a tuple.  (You might argue
that "behind the scenes we write a tuple that disappears
automagically, but that's an implementation detail that might
someday change and should not be something users need to think
about a lot.)  We put a lot of effort into minimizing false
positives everywhere we could, and I'm not sure why you seem to be
arguing that we should not do so here.  If it proves impractical to
"do it right", we would not destroy logical correctness by using
the patch Thomas proposed, but we would increase the number of
transaction rollbacks and retries, which has a performance hit.

BTW, feel free to post a fix for the locking issue separately
when/if you have one.  I'm not looking at that for the moment,
since it sounded like you had already looked at it and were working
on something.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Kevin Grittner
Date:
On Thu, Oct 13, 2016 at 3:16 PM, Kevin Grittner <kgrittn@gmail.com> wrote:
> On Thu, Oct 13, 2016 at 2:16 PM, Peter Geoghegan <pg@bowt.ie> wrote:

>> We must still determine if a fix along the lines of the one proposed
>> by Thomas is basically acceptable (that is, that it does not clearly
>> break any documented guarantees, even if it is overly strict).
>> Separately, I'd be interested in seeing how specifically we could do
>> better with the patch that you have in the works for this.
>
> Basically, rather than just failing, I think we should call
> CheckForSerializableConflictOut() (which determines whether the
> tuple we are reading causes a rw-conflict between our current
> transaction and the transaction which last wrote that tuple) and
> PredicateLockTuple() (which tells later updates or deletes that
> we've read the tuple).

I'm wondering whether the error is appropriate in the INSERT ... ON
CONFLICT DO UPDATE case.  For example, with
ExecCheckHeapTupleVisible() commented out, this does not violate
the SERIALIZABLE requirements (which is that the behavior of any
set of concurrent serializable transactions which successfully
commit must be consistent with running them one at a time in some
order):

CREATE TABLE with_pk (i integer PRIMARY KEY, v text);

-- T1:
START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO with_pk VALUES (4) ON CONFLICT DO NOTHING;

-- T2:
START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO with_pk AS t VALUES (8)
  ON CONFLICT (i) DO UPDATE SET v = 'updated';
-- T2 blocks, waiting for T1

-- T1:
COMMIT;
-- T2 unblocks and does the "ON CONFLICT ... UPDATE"

-- T2:
COMMIT;

It seems to me that the result is consistent with T1 -> T2.  There
is no cycle in the apparent order of execution, and no error is
needed.  Can you show a case where there is a problem?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Thomas Munro
Date:
On Fri, Oct 14, 2016 at 2:04 AM, Kevin Grittner <kgrittn@gmail.com> wrote:
> On Wed, Oct 12, 2016 at 8:06 PM, Thomas Munro <thomas.munro@enterprisedb.com> wrote:
>> The "higher isolation levels" probably shouldn't be treated the same way.
>>
>> I think Peter's right about REPEATABLE READ.  We should definitely
>> raise the error immediately as we do in that level, because our RR
>> (SI) doesn't care about write skew and all that stuff, it just
>> promises that you can only see data in your snapshot.
>
> But the whole point of the special code for both RI and INSERT/ON
> CONFLICT is to get "underneath" that and provide a "primitive" that
> can see things an application statement can't, for better
> performance and error handling.  What SERIALIZABLE promises is that
> it runs exactly the same as REPEATABLE READ, but with some
> additional monitoring for serialization failure errors in some
> places that REPEATABLE READ does not generate them -- this would be
> the first and only place that SERIALIZABLE would break that model.
> The idea seems completely wrong and arbitrary.

Ugh, yeah.  Thanks for this reminder of the relationship between SI
and SSI, which I somehow temporarily lost sight of.

> Where do you see a problem if REPEATABLE READ handles INSERT/ON
> CONFLICT without error?  In many cases it would actually be
> providing a result consistent with a serial execution of the
> transactions; and where it doesn't, it would be the same anomalies
> that are possible with anything else under REPEATABLE READ.

I thought that there was something fishy about the idea of not running
Peter's check in the case of ON CONFLICT DO NOTHING in RR, because
then there isn't an opportunity to detect serialization failure that
the DO UPDATE variant has.  Upon reflection, DO NOTHING is not very
different from INSERT with an exception handler for unique_violation
that does nothing, and that doesn't cause RR to raise an error.  I see
now that you are right, and the check is probably bogus for RR.
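
For comparison, a sketch of that equivalence (a DO block standing in
for application code; with_pk is the example table from upthread):

DO $$
BEGIN
    INSERT INTO with_pk VALUES (2);
EXCEPTION
    WHEN unique_violation THEN
        NULL;  -- swallow the duplicate, as ON CONFLICT DO NOTHING would
END;
$$;

Under REPEATABLE READ this raises no serialization error, even when the
conflicting row was committed by a concurrent transaction.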

>> But yeah, the existing code raises false positive serialization
>> failures under SERIALIZABLE, and that's visible in the isolation test
>> I posted: there is actually a serial order of those transactions with
>> the same result.
>
> Exactly.  The error based on the write conflict with ON CONFLICT DO
> NOTHING in your patch is really a false positive.  That doesn't
> break correctness, but it hurts performance, so it should be
> avoided if possible.

Agreed.  The check is bogus for SERIALIZABLE too, if we have proper SSI checks.

>> When working on commit fcff8a57 I became suspicious of the way ON
>> CONFLICT interacts with SSI, as I mentioned in passing back then[1],
>> thinking mainly of false negatives.  I failed to find a
>> non-serializable schedule involving ON CONFLICT that was allowed to
>> run, though I didn't spend much time on it.   One thing that worries
>> me is the final permutation of read-write-unique-4.spec, which
>> produces an arguably spurious UCV, that is, a transaction that doesn't
>> commit but raises a UCV instead of the serialization failure you might
>> expect.  The ON CONFLICT equivalent might be a transaction that takes
>> the ON CONFLICT path and then commits, even though it should be
>> considered non-serializable.  I would really like to understand that
>> case better, and until then I wouldn't bet my boots that it isn't
>> possible to commit anomalies using ON CONFLICT under SERIALIZABLE
>> without Peter's check (or even with it), despite the fact that it
>> reaches predicate locking code via heap_fetch etc.
>
> Hm.  With the duplicate key error I fail to see how any anomaly
> could make it to a committed state in the database, although I
> agree it is unfortunate that there is that one case where it really
> should be considered a serialization failure that we haven't yet
> coerced to yield that instead of the duplicate key error.

Right, in the unique_violation case it can't commit so there's no
problem (it would just be nicer to users if we could catch that case;
you might call it a false negative but it is harmless because a
unique_violation saves the day).  What I'm wondering about though is
whether a similar ON CONFLICT schedule suffers a similar problem, but
would allow you to commit.  For example, I think the ON CONFLICT
equivalent might be something like the following (rather contrived)
schedule, which happily commits if you comment out Peter's check:

(1)  postgres=# create table bank_account (id int primary key, cash int);
(1)  CREATE TABLE
(1)  postgres=# begin transaction isolation level serializable ;
(1)  BEGIN

(2)      postgres=# begin transaction isolation level serializable ;
(2)      BEGIN

(1)  postgres=# select * from bank_account where id = 1;
(1)  ┌────┬──────┐
(1)  │ id │ cash │
(1)  ├────┼──────┤
(1)  └────┴──────┘
(1)  (0 rows)

(2)      postgres=# insert into bank_account values (1, 100);
(2)      INSERT 0 1

(1)  postgres=# insert into bank_account values (1, 200) on conflict do nothing;
(1)  ...waits for tx2...

(2)      postgres=# commit;
(2)      COMMIT

(1)  INSERT 0 0
(1)  postgres=# commit;
(1)  COMMIT

If tx1 ran before tx2, then it would have succeeded in inserting (1,
200), and tx2 would have failed with unique_violation.  If tx2 ran
before tx1, then tx1's SELECT command would have seen (1, 100) and
possibly taken a different course of action.  So this schedule is
non-serializable, right?

If you remove ON CONFLICT DO NOTHING, then tx1 gets a unique_violation
after tx2 commits, which is similar to the last case in
read-write-unique-4.spec.  To be able to produce a cycle that SSI can
detect, perhaps an INSERT containing an implicit uniqueness check
would need to be modelled as a read followed by a write.  I couldn't
make that work, but I'm not sure if it's sensible anyway: wouldn't
overlapping transactions consisting of just a single INSERT with the
same key then produce a false positive, instead of unique_violation in
one transaction?

--
Thomas Munro
http://www.enterprisedb.com


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Kevin Grittner
Date:
On Thu, Oct 13, 2016 at 5:26 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:
> On Fri, Oct 14, 2016 at 2:04 AM, Kevin Grittner <kgrittn@gmail.com> wrote:

>> Where do you see a problem if REPEATABLE READ handles INSERT/ON
>> CONFLICT without error?

> I think the ON CONFLICT
> equivalent might be something like the following (rather contrived)
> schedule, which happily commits if you comment out Peter's check:
>
> (1)  postgres=# create table bank_account (id int primary key, cash int);
> (1)  CREATE TABLE
> (1)  postgres=# begin transaction isolation level serializable ;
> (1)  BEGIN
>
> (2)      postgres=# begin transaction isolation level serializable ;
> (2)      BEGIN
>
> (1)  postgres=# select * from bank_account where id = 1;
> (1)  ┌────┬──────┐
> (1)  │ id │ cash │
> (1)  ├────┼──────┤
> (1)  └────┴──────┘
> (1)  (0 rows)
>
> (2)      postgres=# insert into bank_account values (1, 100);
> (2)      INSERT 0 1
>
> (1)  postgres=# insert into bank_account values (1, 200) on conflict do nothing;
> (1)  ...waits for tx2...
>
> (2)      postgres=# commit;
> (2)      COMMIT
>
> (1)  INSERT 0 0
> (1)  postgres=# commit;
> (1)  COMMIT
>
> If tx1 ran before tx2, then it would have succeeded in inserting (1,
> 200), and tx2 would have failed with unique_violation.  If tx2 ran
> before tx1, then tx1's SELECT command would have seen (1, 100) and
> possibly taken a different course of action.  So this schedule is
> non-serializable, right?

Right.  This is a case that needs something done if we take out the
rather overzealous check that is there now.  Thanks for finding an
example.  The trick now is to generalize to find the boundaries of
what is a problem and what isn't, so we can know what we are aiming
for as an "ideal" solution, and compare possible solutions for how
close they come.

> If you remove ON CONFLICT DO NOTHING, then tx1 gets a unique_violation
> after tx2 commits, which is similar to the last case in
> read-write-unique-4.spec.  To be able to produce a cycle that SSI can
> detect, perhaps an INSERT containing an implicit uniqueness check
> would need to be modelled as a read followed by a write.  I couldn't
> make that work, but I'm not sure if it's sensible anyway: wouldn't
> overlapping transactions consisting of just a single INSERT with the
> same key then produce a false positive, instead of unique_violation in
> one transaction?

If two transactions simultaneously attempted an INSERT of the same
key, one would block (as it would now) and if the other
successfully committed the blocked transaction would then get a
serialization failure error.  If the transactions did not overlap
you would get a duplicate key error.  That would arguably be nicer
behavior than we have now.  I think that if, within a serializable
transaction, we internally add a predicate lock for each page as we
descend to the point of insertion on a unique index, we might get
exactly that behavior.
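
Sketched out (sessions s1/s2 and the with_pk table are illustrative),
today's behavior for the overlapping case looks like this; with
predicate locks taken during the index descent, s2 would instead get a
serialization failure:

s1: BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
s2: BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
s1: INSERT INTO with_pk VALUES (9);
s2: INSERT INTO with_pk VALUES (9);  -- blocks on s1's insert
s1: COMMIT;
    -- s2: ERROR:  duplicate key value violates unique constraint
    --     "with_pk_pkey"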

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Tom Lane
Date:
I've pushed a simplified (no refactoring) version of the fix proposed
by Thomas and Peter, so that we have some kind of fix in place for
tomorrow's releases.  I think further improvement along the lines
suggested by Kevin can wait till later.

I noticed that the ON CONFLICT DO NOTHING and ON CONFLICT DO UPDATE cases
are now not terribly consistent about what to do with self-conflicting
insertions.  The examples memorialized in commit a6c0a5b6e are

insert into selfconflict values (1,1), (1,2) on conflict do nothing;
-- succeeds, inserting the first row and ignoring the second

insert into selfconflict values (4,1), (4,2) on conflict(f1) do update set f2 = 0;
ERROR:  ON CONFLICT DO UPDATE command cannot affect row a second time
HINT:  Ensure that no rows proposed for insertion within the same command have duplicate constrained values.

I think that if we believe the first behavior is correct, then the second
behavior is a bug; what that command should do is to insert a single row
containing (4,0).  However, that behavior is the same across all isolation
levels, and it was like that before today's patches, so I didn't undertake
to do anything about it right now.
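
For comparison, a sketch of the behavior argued for here, reached today
via separate commands (assuming selfconflict was created as
"create table selfconflict (f1 int primary key, f2 int)"):

insert into selfconflict values (4,1) on conflict(f1) do update set f2 = 0;
-- inserts (4,1); there is no conflict yet
insert into selfconflict values (4,2) on conflict(f1) do update set f2 = 0;
-- conflicts with the existing (4,1) and updates it, leaving (4,0)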

            regards, tom lane


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Kevin Grittner
Date:
On Thu, Oct 13, 2016 at 5:26 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:

> (1)  postgres=# create table bank_account (id int primary key, cash int);
> (1)  CREATE TABLE
> (1)  postgres=# begin transaction isolation level serializable ;
> (1)  BEGIN
>
> (2)      postgres=# begin transaction isolation level serializable ;
> (2)      BEGIN
>
> (1)  postgres=# select * from bank_account where id = 1;
> (1)  ┌────┬──────┐
> (1)  │ id │ cash │
> (1)  ├────┼──────┤
> (1)  └────┴──────┘
> (1)  (0 rows)
>
> (2)      postgres=# insert into bank_account values (1, 100);
> (2)      INSERT 0 1
>
> (1)  postgres=# insert into bank_account values (1, 200) on conflict do nothing;
> (1)  ...waits for tx2...
>
> (2)      postgres=# commit;
> (2)      COMMIT
>
> (1)  INSERT 0 0
> (1)  postgres=# commit;
> (1)  COMMIT

This is a really diabolical example.  (Thanks for that!)

Without use of the ON CONFLICT option, using standard statements in
SERIALIZABLE transactions, there would be no serialization anomaly.
If both transactions first tested for the existence of the row with
a SELECT statement, tx1 would get a serialization failure; in the
example as shown, tx1 would get a duplicate key error (even though
we would like to get to the point of having it get a serialization
failure instead).

If we took out the existing check and modified INSERT under
SERIALIZABLE transactions to acquire predicate locks during initial
insertion of the index key in the index used by the ON CONFLICT
clause (like the ones which would be acquired by a SELECT which
used the index), we would handle many cases, but not this one
(since the INSERT in tx2 does not use the ON CONFLICT clause).
This example would cause a rw-conflict from tx1 to tx2, but not
vice versa, since tx2 doesn't read.  So, no "dangerous structure"
in the SSI sense, and no serialization failure, even though the
results are not consistent with any serial execution of the
transactions.

I *think* that to generate the correct rw-conflicts to allow
dropping the existing check (recently proposed by Thomas Munro and
implemented by Tom Lane), we would need to acquire predicate locks
during the descent to insert into each unique index during any
INSERT under SERIALIZABLE transaction isolation.  The obvious
question is whether the overhead of doing that would be made up by
the work saved through reduced false positive serialization
failures.  It does not seem obvious which would win on overall
performance; in fact it seems very likely that it would depend on
the workload.

My initial thought is that since reducing the false positive rate
would only help when there was a high rate of conflicts under the
existing patch, and it would add code complexity and cost for the
case where conflict rate is low, we might want to just leave
the current fix and see whether there are complaints from the field
about the false positive rate.

Reducing the rate of false positive serialization failures is a
worthy goal, but it's gotta make sense from a cost/benefit
perspective.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Peter Geoghegan
Date:
On Mon, Oct 24, 2016 at 8:07 AM, Kevin Grittner <kgrittn@gmail.com> wrote:
> My initial thought is that since reducing the false positive rate
> would only help when there was a high rate of conflicts under the
> existing patch, and it would add code complexity and cost for the
> case where conflict rate is low, that we might want to just leave
> the current fix and see whether there are complaints from the field
> about the false positive rate.
>
> Reducing the rate of false positive serialization failures is a
> worthy goal, but it's gotta make sense from a cost/benefit
> perspective.

What are your thoughts on the back-and-forth between myself and Tom
concerning predicate locks within the heap_fetch_tuple() path last
weekend? I now think that there might be an outstanding concern about
ON CONFLICT DO NOTHING + SSI here.


--
Peter Geoghegan


Re: SERIALIZABLE and INSERTs with multiple VALUES

From
Kevin Grittner
Date:
On Wed, Oct 26, 2016 at 3:20 PM, Peter Geoghegan <pg@bowt.ie> wrote:
> On Mon, Oct 24, 2016 at 8:07 AM, Kevin Grittner <kgrittn@gmail.com> wrote:
>> My initial thought is that since reducing the false positive rate
>> would only help when there was a high rate of conflicts under the
>> existing patch, and it would add code complexity and cost for the
>> case where conflict rate is low, we might want to just leave
>> the current fix and see whether there are complaints from the field
>> about the false positive rate.
>>
>> Reducing the rate of false positive serialization failures is a
>> worthy goal, but it's gotta make sense from a cost/benefit
>> perspective.
>
> What are your thoughts on the back-and-forth between myself and Tom
> concerning predicate locks within heap_fetch_tuple() path last
> weekend? I now think that there might be an outstanding concern about
> ON CONFLICT DO NOTHING + SSI here.

As far as I can see, Tom's fixes are all spot-on.

It *might* be possible to call some special custom variant of
heap_fetch() instead of the usual one here, but it would be very
hard to prove that it was safe against introducing a bug in the
serialization logic in all cases, and seems to me that it would be
fragile against future changes.  What's more, if we do decide that
the false positive rate with the existing fix is too high, we would
need to use the standard heap_fetch() function in implementing the
more targeted logic.

By the way, if we do get complaints from the field about
performance problems from the false positives in this area, they
would need to show that the hit was pretty severe to justify
back-patching to any stable branches.  We are talking about a
performance tuning change, and one that is not necessarily a winner
in all cases; such tuning is rarely back-patched to stable
branches.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company