Re: Postgresql SSI: read/write dependencies

From: Kevin Grittner
Subject: Re: Postgresql SSI: read/write dependencies
Msg-id: 1695737606.1143170.1446071160691.JavaMail.yahoo@mail.yahoo.com
In reply to: Postgresql SSI: read/write dependencies  (Pavel Suderevsky <psuderevsky@gmail.com>)
List: pgsql-general
On Wednesday, October 28, 2015 4:33 PM, Pavel Suderevsky <psuderevsky@gmail.com> wrote:

> I am working on getting better insight into the postgresql
> serialization mechanism.  The main question is how to determine
> serialization behaviour at the system level and predict the exception:
> ERROR:  could not serialize access due to read/write dependencies among transactions
> DETAIL:  Reason code: Canceled on identification as a pivot, during commit attempt.
> HINT:  The transaction might succeed if retried.

You may be over-thinking this.  I'll get into some detail below and
point you to some documents, but the idea of using serializable
transactions instead of micro-managing race conditions with LOCK
statements and SELECT FOR UPDATE, etc. is the simplicity of it.
You make sure that you handle any serialization failure by ignoring
any data read in that transaction and starting the transaction over
from the beginning.  Done.  You don't need to know more than that
to make it work.  The docs have suggestions for improving
performance when using this technique, and that would be the next
level to look at:

http://www.postgresql.org/docs/9.4/interactive/transaction-iso.html#XACT-SERIALIZABLE
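
To make the retry idea concrete, here is a minimal sketch of the
client-side pattern.  It assumes Python with psycopg2; the function
name, the DSN, and the work() callback are my own placeholders for
whatever your application actually does:

import psycopg2

def run_with_retry(dsn, work, max_retries=10):
    # work(cur) should perform all reads and writes of one
    # logical transaction through the cursor it is given.
    conn = psycopg2.connect(dsn)
    conn.set_session(isolation_level='SERIALIZABLE')
    try:
        for _ in range(max_retries):
            try:
                with conn.cursor() as cur:
                    result = work(cur)
                conn.commit()
                return result
            except psycopg2.Error as e:
                conn.rollback()          # discard everything from this attempt
                if e.pgcode != '40001':  # SQLSTATE serialization_failure
                    raise                # anything else is a real error
                # else fall through and run the transaction again
        raise RuntimeError("gave up after %d serialization failures"
                           % max_retries)
    finally:
        conn.close()

The important part is that the *whole* transaction body runs again
and nothing read in the failed attempt is used.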

That said, if you have an interest in learning more, the PostgreSQL
project has a Wiki page that was used during development and a
README in with the source code to explain the technology to those
working on the PostgreSQL engine itself:

https://wiki.postgresql.org/wiki/Serializable

http://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/backend/storage/lmgr/README-SSI;hb=master

The README was initially based on the Wiki page, so those two have
a lot of overlap.

There is also a paper on how it all works (and how it was
developed) which was presented at The 38th International Conference
on Very Large Data Bases, August 27th - 31st 2012, Istanbul,
Turkey.  It gets a bit more technical, if you want that:

http://vldb.org/pvldb/vol5/p1850_danrkports_vldb2012.pdf

> I was doing some tests with simple select/update/insert
> operations and I have the following assumption.  One of two
> concurrent transactions will fail with the mentioned exception
> after the first commit if:
> 1. Both transactions have acquired SIReadLock and RowExclusiveLock.
> 2. Both transactions have crossed on the same rows in their
> snapshots, and the xid of any of these rows has changed as a
> result of the first commit.
>
> Are these two statements correct?

Neither, really.

> What else aspects should be mentioned when investigating this?

Look at the above-referenced documents.  It takes some careful
study and a fair amount of time to really grasp what's happening
*internally*; but you no more need to understand all that to *use*
it than you need to understand the physics of all the parts in a
car engine to turn the key, choose a gear, and step on the pedals
to make it go.

> How these dependencies can be found?

There are many different strings that can be returned in the DETAIL
of the message you cite above.  Where there is useful information
about the dependencies available at the point in the code where the
problem is found, it will be included there.  Generally, though,
you don't get more information than what position (relative to the
pivot) the canceled transaction held in the "dangerous structure"
that caused the error.

Also note that there can be false positives, primarily due to the
granularity of the predicate locks and other summarization of the
relevant data.  This is necessary to get the SSI algorithm to work
in finite RAM and complete its work in finite time.
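
If you want to see the pivot cancellation happen on demand, the
classic write-skew interleaving reproduces it.  This is just a
sketch (psycopg2 again, and the scratch table mytab is made up for
the demonstration):

import psycopg2

dsn = "dbname=test"              # hypothetical scratch database

setup = psycopg2.connect(dsn)
setup.autocommit = True
with setup.cursor() as cur:
    cur.execute("DROP TABLE IF EXISTS mytab")
    cur.execute("CREATE TABLE mytab (class int, value int)")
    cur.execute("INSERT INTO mytab VALUES (1, 10), (2, 20)")
setup.close()

t1 = psycopg2.connect(dsn)
t2 = psycopg2.connect(dsn)
for conn in (t1, t2):
    conn.set_session(isolation_level='SERIALIZABLE')
c1, c2 = t1.cursor(), t2.cursor()

c1.execute("SELECT sum(value) FROM mytab WHERE class = 1")  # T1 reads class 1
c2.execute("SELECT sum(value) FROM mytab WHERE class = 2")  # T2 reads class 2
c1.execute("INSERT INTO mytab VALUES (2, %s)",
           (c1.fetchone()[0],))  # T1 writes what T2 read
c2.execute("INSERT INTO mytab VALUES (1, %s)",
           (c2.fetchone()[0],))  # T2 writes what T1 read

t1.commit()                      # the first committer succeeds
try:
    t2.commit()                  # typically canceled here as the pivot
except psycopg2.Error as e:
    t2.rollback()
    print(e.pgcode, e)           # 40001: could not serialize access ...

Each transaction writes something the other has read, which closes
the rw-conflict cycle; the first commit succeeds and the second is
canceled with SQLSTATE 40001.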

> Also I have two extra questions related to this case after reading
> and trying https://wiki.postgresql.org/wiki/SSI.

Fair enough.  I'm glad you've been looking through that.

> 1. Primary Colors - "the predicate locking used as part of conflict
> detection works based on pages and tuples actually accessed, and
> there is a random factor used in inserting index entries which have
> equal keys, in order to minimize contention".  Really random?  Is
> there no way to determine a possible failure at the system level?

When there are a lot of duplicate values in a btree index, and the
btree logic is picking a place to insert a new tuple, each time it
finds a candidate page that is full, it "rolls the dice" to decide
whether to look "to the right" for some page that has room or to
split the current page.  This has proven to be a useful
optimization so that a large number of btree tuple insertions with
keys having low cardinality doesn't degrade to O(N^2) performance.

> 2. Deposit Report (not even a question but perplexity) - it is
> strange that T2 doesn't guarantee that no new receipts will be added
> with an obsolete batch. Well, yes, operations are serialized actually,
> but it turns out that if I want no new receipts to be added with an
> obsolete batch I need to add extra select in T1. =)

"Traditional" implementations of serializable transactions use
strict two-phase locking (S2PL), which does give the behavior you
are describing.  S2PL also performs far worse than SSI for many
common workloads.  See the VLDB paper for details, but one
highlight from the benchmarking Dan Ports did at the MIT labs
showed that running SIBENCH with a 10000 row table size performed
almost 5x better with SSI than with S2PL (which uses the blocking
locks you have apparently come to expect).

S2PL performs so poorly that it was dropped from PostgreSQL many
years ago.  You can approximate it using LOCK statements, SELECT
... FOR UPDATE, duplicating data which you maintain via triggers,
and other tricks; but if you do that you will generally see
performance degrade below what you get from traditional RAM-based
S2PL.
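
For what it's worth, the blocking flavor you expected in the
Deposit Report case looks roughly like this (hypothetical
control/receipt tables loosely following the wiki example, default
READ COMMITTED isolation):

import psycopg2

# Hypothetical schema: control(deposit_no int), receipt(batch int, amount int).
conn = psycopg2.connect("dbname=test")
with conn:                        # commits on clean exit from the block
    with conn.cursor() as cur:
        # Take a row lock on the control row, so a concurrent
        # UPDATE of deposit_no must wait until we commit -- the
        # blocking (S2PL-style) guarantee you were expecting.
        cur.execute("SELECT deposit_no FROM control FOR UPDATE")
        (batch,) = cur.fetchone()
        cur.execute("INSERT INTO receipt (batch, amount) VALUES (%s, %s)",
                    (batch, 100))
conn.close()

The FOR UPDATE lock makes the concurrent update of the control row
wait for this commit, at the cost of exactly the blocking that the
benchmarks above measure.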

I hope this is enough to get you comfortable with what's happening
within SSI.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
