Re: database-level lockdown

From Filipe Pina
Subject Re: database-level lockdown
Date
Msg-id 271401C5-E8DD-4B27-8C27-7FB0DB9617C2@impactzero.pt
In response to Re: database-level lockdown  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: database-level lockdown  (Filipe Pina <filipe.pina@impactzero.pt>)
List pgsql-general
Exactly, that’s why there’s a limit on the number of retries. On the last try I wanted something like a full lockdown to make
sure the transaction will not fail due to a serialization failure (if no other processes are touching the database, it
can’t happen).

So if two transactions were retrying over and over, the first one to reach max_retries would acquire that “global
lock”, making the other one wait, and then the second one would also be able to commit successfully...
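
For reference, roughly the kind of thing I mean (just a sketch, assuming Python with psycopg2 and using a PostgreSQL advisory lock as the “global lock”; the names and key value here are illustrative, not from my actual code):

import psycopg2
from psycopg2 import errors

MAX_RETRIES = 5
GLOBAL_LOCK_KEY = 1  # arbitrary advisory-lock key; every participating worker must use the same one


def run_with_retries(dsn, work):
    """Run work(cursor) in a SERIALIZABLE transaction, retrying on SQLSTATE 40001.
    Ordinary attempts take the advisory lock in shared mode; the final attempt
    takes it exclusively, so it only runs once no other participant is active."""
    for attempt in range(1, MAX_RETRIES + 1):
        conn = psycopg2.connect(dsn)
        conn.set_session(isolation_level="SERIALIZABLE")
        try:
            with conn:  # commits on success, rolls back on exception
                with conn.cursor() as cur:
                    if attempt < MAX_RETRIES:
                        # normal attempts just register themselves with a shared lock
                        cur.execute("SELECT pg_advisory_xact_lock_shared(%s)", (GLOBAL_LOCK_KEY,))
                    else:
                        # last try: exclusive "global lock" -> waits until everyone else is done
                        cur.execute("SELECT pg_advisory_xact_lock(%s)", (GLOBAL_LOCK_KEY,))
                    work(cur)
            return  # committed
        except errors.SerializationFailure:  # OperationalError code 40001
            if attempt == MAX_RETRIES:
                raise
        finally:
            conn.close()

The point being that only the final attempt blocks everyone else; the normal path just takes the shared lock and carries on (and of course this only helps against other transactions that also go through the same wrapper).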

> On 11/06/2015, at 20:27, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Filipe Pina <filipe.pina@impactzero.pt> writes:
>> It will try 5 times to execute each instruction (in case of
>> OperationError) and in the last one it will raise the last error it
>> received, aborting.
>
>> Now my problem is that aborting for the last try (on a restartable
>> error - OperationalError code 40001) is not an option... It simply
>> needs to get through, locking whatever other processes and queries it
>> needs.
>
> I think you need to reconsider your objectives.  What if two or more
> transactions are repeatedly failing and retrying, perhaps because they
> conflict?  They can't all forcibly win.
>
>             regards, tom lane


