Re: Am I locking more than I need to?

From: Chris Browne
Subject: Re: Am I locking more than I need to?
Msg-id: 60ad01hf8j.fsf@dev6.int.libertyrms.info
In reply to: Am I locking more than I need to?  (Jeff Davis <jdavis-pgsql@empires.org>)
List: pgsql-general
pgsql@bluepolka.net ("Ed L.") writes:
> On Friday May 21 2004 12:50, Jeff Davis wrote:
>>
>> client1=> BEGIN;
>> -- test to see if there's already a record there. If so, UPDATE
>> --   if not, INSERT
>> client1=> SELECT * from cart_items where cart_id=X AND prod_id=Y;
>> -- no record, so INSERT
>> client1=> INSERT into cart_items(cart_id,prod_id,quantity)
>> VALUES(X,Y,1);
>> client2=> SELECT * from cart_items where cart_id=X AND prod_id=Y;
>> -- still no record, since client1 didn't commit yet
>> client1=> COMMIT;
>> -- now client2 needs to insert
>> client2=> INSERT into cart_items(cart_id,prod_id,quantity)
>> VALUES(X,Y,1);
>> client2=> COMMIT;
>> -- Oops, now there are two records in there.
>>
>> That's the condition I was worried about.
>
> Ah, I see.  I second Christopher Browne's comments on the unique
> index (I assumed you were doing that) and the ease of checking
> errors in the app.  If you don't have transactions spanning multiple
> pageviews and you don't have multiple people modifying the same
> shopping cart at the same time, it would seem this is a non-issue.
> But I guess you could try to explicitly lock the table.  I've never
> done it that way, instead preferring, like C.B., to enforce integrity
> at the schema level with the unique index and having the app handle
> return values, errors, etc.  (In DBI, you need to set a flag so it
> lets you handle the error yourself instead of aborting; RaiseError,
> maybe?)  Maybe it's wise to systematically handle all DB errors, but
> I suspect you'll never see this one occur.

I think it's just wishful thinking to hope that there's anything that
is _fundamentally_ a lot better than having the UNIQUE index, and
recovering from the "not unique" errors that may arise.
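
To make that concrete, here's a minimal sketch of the pattern, using
the column names from Jeff's example.  (The quantity-bumping UPDATE
is just one plausible recovery; your app may want something else.)

  CREATE UNIQUE INDEX cart_items_cart_prod_uq
      ON cart_items (cart_id, prod_id);

  -- In the application: try the INSERT first...
  INSERT INTO cart_items (cart_id, prod_id, quantity)
      VALUES (X, Y, 1);
  -- ...and if that fails with a duplicate-key error, recover by
  -- updating the row that beat us to it:
  UPDATE cart_items
      SET quantity = quantity + 1
      WHERE cart_id = X AND prod_id = Y;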

- If you lock the table, then that adds some _new_ error conditions
  that can occur, namely that an update may get blocked because the
  table is locked.  Coping with that requires some application code
  (see the sketch just after this list).

- The other approach one might _imagine_ would be useful would be to
  run queries that are always checking to see if the inserts look like
  they'll be unique.  Unfortunately, that CAN'T work: each connection
  runs in its own transaction context, so client2's pre-check simply
  cannot see client1's as-yet-uncommitted INSERT.
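
Here is roughly what the explicit-lock version looks like.  (The
lock mode is a judgment call; EXCLUSIVE still lets plain SELECTs
through while forcing other writers to queue up.)

  BEGIN;
  -- Every session that modifies cart_items must take this lock
  -- first; a second writer blocks here until we commit or abort.
  LOCK TABLE cart_items IN EXCLUSIVE MODE;
  SELECT * FROM cart_items WHERE cart_id = X AND prod_id = Y;
  -- Row found -> UPDATE; not found -> INSERT.  No race is possible,
  -- but the app now has to cope with lock waits and timeouts.
  COMMIT;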

I can think of three other approaches:

1.  You create a temp table for each cart, and somehow tie the cart to
    a single persistent connection.  It is _impossible_ for another
    connection to interfere, because other connections can't even
    see the cart.

    If you can associate a process with each cart, and can accept the
    overheads of having a DB connection for each cart that is in
    progress, this ought to be pretty slick.  Cart tables pass in and
    out of existence, cleaning themselves up as needed.  Quite cool.
    But you can't ever use connection pooling, which may be
    unacceptable...  (See the first sketch after this list.)

2.  You don't insert directly into the cart/product table; you insert
    into a "product request" table, that is a queue of requests.
    There's a big literature on this; look up "Message Queueing,"
    and perhaps look at IBM's product MQSeries.  (Microsoft made a
    clone called "MSMQ.")

    A single serial process periodically goes through that queue, and
    cleanly moves the data in the queue into the cart/product table
    (see the second sketch after this list).

    That means there's some asynchronicity; data may stay in the queue
    for a while, which may be a problem.  Furthermore, there is an
    efficiency loss because every insert has to be done twice; once
    into the queue, and then once into the "real" table.

3.  Look up the notion of "optimistic locking."

    This is pretty big these days in Java and Smalltalk applications;
    I won't bother explaining it Yet Again.  If your application is
    getting hammered because big, long, expensive transactions doing
    lots of updates keep failing at COMMIT time due to uniqueness
    constraints, OL can cut the cost.  (See the third sketch after
    this list.)
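
A sketch of approach 1 (the table and column names are invented for
illustration; the point is the TEMP part):

  -- Visible only to this connection; dropped automatically when
  -- the connection (and with it the cart session) goes away.
  CREATE TEMP TABLE my_cart (
      prod_id  integer PRIMARY KEY,
      quantity integer NOT NULL
  );
  INSERT INTO my_cart (prod_id, quantity) VALUES (Y, 1);
  -- At checkout, copy my_cart into the persistent order tables
  -- inside a single transaction.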
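A sketch of approach 2 (the queue table is an invention for
illustration; MQSeries and friends give you much more than this):

  CREATE TABLE cart_item_requests (
      req_id   serial PRIMARY KEY,
      cart_id  integer NOT NULL,
      prod_id  integer NOT NULL,
      quantity integer NOT NULL
  );
  -- Web sessions only ever INSERT here, so they never conflict.
  -- A single worker process periodically drains the queue; being
  -- the only writer to cart_items, it can do the check-then-
  -- INSERT-or-UPDATE dance with no race, then DELETE the requests
  -- it has folded in, all in one transaction.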
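And a sketch of the optimistic idea in its usual version-column form
(the version column is an assumption; cart_items as shown earlier
doesn't have one):

  -- Read the row, noting its version, without taking any lock:
  SELECT quantity, version FROM cart_items
      WHERE cart_id = X AND prod_id = Y;
  -- ...do the long, expensive work outside the database...
  -- Then write back only if nobody changed the row meanwhile:
  UPDATE cart_items
      SET quantity = quantity + 1, version = version + 1
      WHERE cart_id = X AND prod_id = Y AND version = V;
  -- "0 rows updated" means somebody else won; re-read and retry
  -- rather than holding locks for the whole interaction.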

All these approaches have a big impact on application design.  And I
don't see them being _fundamentally_ better than just using the UNIQUE
index.
--
let name="cbbrowne" and tld="acm.org" in String.concat "@" [name;tld];;
http://cbbrowne.com/info/advocacy.html
"Ahhh. A man with a sharp wit.  Someone ought to take it away from him
before he cuts himself." -- Peter da Silva
