Gokulakannan Somasundaram <gokul007@gmail.com> writes:
> Can you also explain how are we avoiding duplicates in this scenario?
> a) Say there are three pages (P, Q, R) full of duplicate tuples of id x
> that are deleted but not dead (due to some long-running transaction).
> b) Now Session A gets in and checks the duplicate tuples for their
> liveness against the HeapTuple for id x, with a shared lock on all the
> pages P, Q, and R. Since all are deleted, it will get the message that it
> need not come back to check again for uniqueness. Finally it again starts
> from P to check for free space to insert its tuple. Say it inserts the
> tuple at page Q.
> c) Now Session B (with the same id x) starts after Session A, but it
> passes Q before the insertion of the tuple by Session A. It will also get
> the response from _bt_check_unique that it need not come back for a
> second unique check. Now it checks for free space from P and it finds
> free space at P. Then it will insert the new record at P itself.
> So we have two duplicate records, even though there is a unique constraint.
> Is this a possible scenario?
Are you talking about exclusion constraints or btree uniqueness
constraints? This doesn't seem to be a particularly accurate
description of the implementation of either one. The way btree
deals with this is explained in _bt_doinsert:
 * NOTE: obviously, _bt_check_unique can only detect keys that are already
 * in the index; so it cannot defend against concurrent insertions of the
 * same key.  We protect against that by means of holding a write lock on
 * the target page.  Any other would-be inserter of the same key must
 * acquire a write lock on the same target page, so only one would-be
 * inserter can be making the check at one time.  Furthermore, once we are
 * past the check we hold write locks continuously until we have performed
 * our insertion, so no later inserter can fail to see our insertion.
 * (This requires some care in _bt_insertonpg.)
regards, tom lane