Re: Hard problem with concurrency

From: Greg Stark
Subject: Re: Hard problem with concurrency
Date:
Msg-id: 87smunfr16.fsf@stark.dyndns.tv
In reply to: Re: Hard problem with concurrency  ("Christopher Kings-Lynne" <chriskl@familyhealth.com.au>)
Responses: Re: Hard problem with concurrency
List: pgsql-hackers
Hm, odd, nobody mentioned this solution:

If you don't have a primary key already, create a unique index on the
combination you want to be unique. Then:

. Try to insert the record
. If you get a duplicate key error then do update instead

No possibility of duplicate records due to race conditions. If two people
try to insert/update at the same time you'll only get one of the two results,
but that's the downside of the general approach you've taken. It's a tad
inefficient if the usual case is updates, but certainly no less efficient
than doing table locks.
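
A minimal sketch of that insert-first, update-on-duplicate pattern in
PL/pgSQL, assuming a hypothetical counters table keyed on name (the table,
column, and function names here are illustrative, not from the original
thread):

-- A unique index/primary key on "name" is what makes the
-- duplicate-key error fire when two sessions insert the same key.
CREATE TABLE counters (
    name  text PRIMARY KEY,
    hits  integer NOT NULL
);

CREATE FUNCTION bump_counter(p_name text) RETURNS void AS $$
BEGIN
    -- Step 1: try the insert.
    BEGIN
        INSERT INTO counters (name, hits) VALUES (p_name, 1);
        RETURN;
    EXCEPTION WHEN unique_violation THEN
        -- Step 2: the key already exists (possibly inserted by a
        -- concurrent session), so fall through and update instead.
        NULL;
    END;

    UPDATE counters SET hits = hits + 1 WHERE name = p_name;
END;
$$ LANGUAGE plpgsql;

One caveat: if rows can also be deleted concurrently, the update after a
unique_violation may itself match zero rows, which is why the usual version
of this dance wraps the whole thing in a retry loop.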

I'm not sure what you're implementing here. Depending on what it is, you might
consider having a table of raw data that you _only_ insert into, and then
process those rows into a table holding the consolidated data you're trying to
gather. I've usually found that's more flexible later, because then you have
all the raw data in the database even if you only present a limited view.
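
As a rough illustration of that append-only layout (table and column names
are made up for the example):

-- Raw events: only ever INSERTed, so concurrent writers never conflict.
CREATE TABLE raw_hits (
    name     text NOT NULL,
    hit_time timestamptz NOT NULL DEFAULT now()
);

-- Consolidated data derived from the raw rows whenever it is needed;
-- it could just as well be materialized into a summary table by a
-- periodic job.
CREATE VIEW hit_counts AS
    SELECT name, count(*) AS hits
    FROM raw_hits
    GROUP BY name;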

-- 
greg


