Re: Debugging deadlocks
| From | Greg Stark |
|---|---|
| Subject | Re: Debugging deadlocks |
| Date | |
| Msg-id | 87y8c4wydb.fsf@stark.xeocode.com |
| In reply to | Re: Debugging deadlocks (Alvaro Herrera <alvherre@dcc.uchile.cl>) |
| Responses | Re: Debugging deadlocks |
| List | pgsql-general |
Alvaro Herrera <alvherre@dcc.uchile.cl> writes:

> Now this can't be applied right away because it's easy to run "out of
> memory" (shared memory for the lock table). Say, a delete or update
> that touches 10000 tuples does not work. I'm currently working on a
> proposal to allow the lock table to spill to disk ...

Is that true even if I'm updating/deleting 1,000 tuples that all
reference the same foreign key? It seems like that should only need a
single lock per (sub)transaction_id per referenced foreign key.

How is this handled currently? Is your patch any worse than the current
behaviour?

--
greg
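[Editor's note: a minimal sketch of the kind of statement under discussion. The table and column names (`orders`, `customers`, `customer_id`) are hypothetical, purely to illustrate the scenario of many rows referencing one foreign-key target.]

```sql
-- Hypothetical schema: many orders reference one customer row.
CREATE TABLE customers (id int PRIMARY KEY);
CREATE TABLE orders (
    id          int PRIMARY KEY,
    customer_id int REFERENCES customers (id)
);

-- A delete touching thousands of order rows. The question above is
-- whether the lock table needs one entry per deleted tuple, or only
-- one entry per (sub)transaction per referenced customers row --
-- here, a single referenced row (customer 42).
DELETE FROM orders WHERE customer_id = 42;
```

Under the interpretation in the question, such a statement would need only one shared-memory lock entry for the referenced `customers` row, regardless of how many `orders` tuples it deletes.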