Re: RFC: Making TRUNCATE more "MVCC-safe"

From	Kevin Grittner
Subject	Re: RFC: Making TRUNCATE more "MVCC-safe"
Date
Msg-id	4F38DC800200002500045439@gw.wicourts.gov
In response to	Re: RFC: Making TRUNCATE more "MVCC-safe"  (Robert Haas <robertmhaas@gmail.com>)
Responses	Re: RFC: Making TRUNCATE more "MVCC-safe"  (Robert Haas <robertmhaas@gmail.com>)
List	pgsql-hackers
Robert Haas <robertmhaas@gmail.com> wrote:
> The example that I remember was related to SELECT FOR
> UPDATE/SELECT FOR SHARE.  The idea of those statements is that you
> want to prevent the row from being updated or deleted until some
> other concurrent action is complete; for example, in the case of a
> foreign key, we'd like to prevent the referenced row from being
> deleted or updated in the relevant columns until the inserting
> transaction is committed.  But it doesn't work, because when the
> updating or deleting process gets done with the lock wait, they
> are still using the same snapshot as before, and merrily do
> exactly the thing that the lock-wait was supposed to prevent.
This issue appears to be a problem for people trying to migrate
from Oracle, where a write conflict would be generated.
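The race is easy to reproduce with a hand-rolled foreign-key check.
A sketch (table and column names are illustrative; both transactions
run at REPEATABLE READ):

-- connection 1 (the "inserting" transaction)
test=# begin isolation level repeatable read;
test=# select * from parent where id = 1 for share;
test=# insert into child (parent_id) values (1);

-- connection 2
test=# begin isolation level repeatable read;
test=# delete from parent where id = 1;
[blocks on connection 1's lock]

-- connection 1
test=# commit;

-- connection 2
[unblocks]
DELETE 1
-- The committed FOR SHARE lock is treated as if it never existed,
-- and connection 2's snapshot cannot see the new child row, so the
-- delete succeeds and orphans it.
test=# commit;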
> If an actual UPDATE is used, it's safe (I think): anyone who was
> going to UPDATE or DELETE the row will fail with some kind of
> serialization error.
Right; a write conflict.
> But a SELECT FOR UPDATE that commits is treated more like an
> UPDATE that rolls back: it's as if the lock never existed. 
> Someone (Florian?) proposed a patch to change this, but it seemed
> problematic for reasons I no longer exactly remember.
It had to do with only having one xmax and how that worked with
subtransactions.
Of course, besides the technical obstacles, such a semantic change
could break existing code for PostgreSQL users.  :-(
> When using an actual foreign key, we work around this by taking a
> new snapshot to cross-check that things haven't changed under us,
> but user-level code can't do that.  At READ COMMITTED, depending
> on the situation, either the fact that we take new snapshots
> pretty frequently or the EPQ machinery sometimes make things work
> sensibly anyway, and at SERIALIZABLE, SSI prevents these kinds of
> anomalies.  But REPEATABLE READ has no protection.
Well, personally I have a hard time calling READ COMMITTED behavior
sensible.  Consider this:
-- connection 1
test=# create table t (id int not null primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
"t_pkey" for table "t"
CREATE TABLE
test=# insert into t select generate_series(1, 10);
INSERT 0 10
-- connection 2
test=# begin;
BEGIN
test=# update t set id = id - 1;
UPDATE 10
-- connection 1
test=# select * from t where id = (select min(id) from t) for update;
[blocks]
-- connection 2
test=# commit;
COMMIT
-- connection 1
[unblocks]
 id 
----
(0 rows)

The subselect computed min(id) = 1 under the statement's snapshot,
but after the lock wait the qual is rechecked against the updated
tuple (now id = 0), which no longer matches -- so the query returns
neither the old minimum nor the new one.
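For contrast, at REPEATABLE READ the same sequence produces a write
conflict rather than a silently empty result (a sketch; the error is
the documented serialization failure for a concurrently updated row):

-- connection 1
test=# begin isolation level repeatable read;
test=# select * from t where id = (select min(id) from t) for update;
[blocks]

-- connection 2 commits its UPDATE as before

-- connection 1
[unblocks]
ERROR:  could not serialize access due to concurrent update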
-Kevin

