On Wed, Mar 30, 2005 at 05:41:04PM -0500, Greg Stark wrote:
>
> Alvaro Herrera <alvherre@dcc.uchile.cl> writes:
>
> > Now this can't be applied right away because it's easy to run "out of
> > memory" (shared memory for the lock table). Say, a delete or update
> > that touches 10000 tuples does not work. I'm currently working on a
> > proposal to allow the lock table to spill to disk ...
>
> Is that true even if I'm updating/deleting 1,000 tuples that all reference the
> same foreign key? It seems like that should only need a single lock per
> (sub)transaction_id per referenced foreign key.
Well, in that case you need 1000 PROCLOCK objects, all pointing to the
same LOCK object. But it still uses shared memory.
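
Roughly, the two objects look like this (a simplified sketch; the real
definitions live in src/include/storage/lock.h and carry much more
bookkeeping, and the stub types here are just for illustration):

    /* Simplified sketch of the two shared-memory lock table structures.
     * The real definitions (src/include/storage/lock.h) have many more
     * fields; the stub types below stand in for PostgreSQL internals. */
    typedef int LOCKMASK;                 /* bitmask of lock modes */
    typedef struct PGPROC PGPROC;         /* per-backend shared struct */

    typedef struct LOCKTAG                /* identifies the locked object */
    {
        unsigned int locktag_field1;      /* e.g. database OID */
        unsigned int locktag_field2;      /* e.g. relation OID or tuple id */
    } LOCKTAG;

    /* One entry per lockable object; all lockers share it. */
    typedef struct LOCK
    {
        LOCKTAG   tag;                    /* what is locked */
        LOCKMASK  grantMask;              /* modes currently granted */
        int       nRequested;             /* total requests across lockers */
    } LOCK;

    /* One entry per (lock, locker) pair; this is the part that
     * multiplies: a thousand lockers of one tuple mean a thousand
     * of these, each taking its own slot in shared memory. */
    typedef struct PROCLOCK
    {
        LOCK     *lock;                   /* the shared LOCK entry above */
        PGPROC   *proc;                   /* the backend holding it */
        LOCKMASK  holdMask;               /* modes this backend holds */
    } PROCLOCK;

Each PROCLOCK is small, but every one is a separate allocation in a
fixed-size shared hash table, so it's the count that hurts, not the
pointer sharing.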
> How is this handled currently? Is your patch any worse than the current
> behaviour?
As it stands, my patch is useless without a provision to spill the
lock table to disk.
The current situation is that we don't use the lock table to lock
tuples; instead we mark them on disk, in the tuple header itself. A
tuple can't really be marked more than once, because there is only one
bit to mark it with. That's why tuple locking is limited to exclusive
locks: there is no way to record more than one shared lock on a tuple.
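
To illustrate what "mark them on disk" means (an illustrative sketch,
not the actual heapam code; the real mechanism sets a flag in the
tuple header's t_infomask and records the locker in t_xmax, but the
struct layout and the constant's value below are simplified
assumptions):

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative sketch of the current on-disk tuple locking.
     * Field names echo the real tuple header; the constant's value
     * is assumed for illustration. */
    #define MARKED_FOR_UPDATE 0x1000      /* the single lock bit */

    typedef struct TupleHeaderSketch
    {
        uint32_t t_xmax;                  /* xid of the locker */
        uint16_t t_infomask;              /* flag bits */
    } TupleHeaderSketch;

    /* Take the (only possible) lock: record our xid, set the bit. */
    static void
    lock_tuple_exclusive(TupleHeaderSketch *tup, uint32_t my_xid)
    {
        tup->t_xmax = my_xid;
        tup->t_infomask |= MARKED_FOR_UPDATE;
    }

    static bool
    tuple_is_locked(const TupleHeaderSketch *tup)
    {
        return (tup->t_infomask & MARKED_FOR_UPDATE) != 0;
    }

There is exactly one bit and one t_xmax slot, so a second locker, even
a shared one, has nowhere to record itself.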
With my patch we need memory for every locked tuple, and it has to be
shared memory. Since shared memory is limited, we can't grab an
arbitrary number of locks simultaneously; thus, deleting a whole table
can fail. You've never seen Postgres fail on a plain DELETE FROM
table, have you?
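
To put numbers on it (a rough sketch; the configuration defaults and
the sizing formula below are assumptions, not exact values):

    #include <stdio.h>

    /* Back of the envelope: why one shared-memory entry per locked
     * tuple exhausts the lock table.  The GUC values and the sizing
     * formula are assumptions for illustration. */
    int
    main(void)
    {
        int  max_connections = 100;            /* assumed default */
        int  max_locks_per_transaction = 64;   /* assumed default */
        long capacity = (long) max_connections * max_locks_per_transaction;
        long tuples_deleted = 10000;           /* the delete from upthread */

        printf("lock table capacity: ~%ld entries\n", capacity);
        printf("entries needed by the delete: %ld\n", tuples_deleted);
        if (tuples_deleted > capacity)
            printf("=> out of shared memory before the delete finishes\n");
        return 0;
    }

With those numbers the table holds only about 6400 entries, so even
the modest 10000-tuple delete mentioned upthread runs out; an
unqualified DELETE on a big table is hopeless, hence the spill-to-disk
proposal.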
--
Alvaro Herrera (<alvherre[@]dcc.uchile.cl>)
"Java is clearly an example of a money oriented programming" (A. Stepanov)