Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors

From: Fabien COELHO
Subject: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
Date:
Msg-id: alpine.DEB.2.20.1707031430380.15247@lancre
In response to: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors  (Marina Polyakova <m.polyakova@postgrespro.ru>)
Responses: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors  (Marina Polyakova <m.polyakova@postgrespro.ru>)
List: pgsql-hackers
>>>> The number of retries and maybe failures should be counted, maybe with
>>>> some adjustable maximum, as suggested.
>>> 
>>> If we fix the maximum number of attempts, the maximum number of failures
>>> for one script execution is bounded above by
>>> (number_of_transactions_in_script * maximum_number_of_attempts). Do you
>>> think we should also add a program option to limit this number further?
>> 
>> Probably not. I think that there should be a configurable maximum of
>> retries on a transaction, which may be 0 by default if we want to be
>> upward compatible with the current behavior, or maybe something else.
>
> I propose the option --max-attempts-number=NUM, where NUM cannot be less than
> 1. I propose it because I think that, for example, --max-attempts-number=100
> reads better than --max-retries-number=99. And maybe it's better to set its
> default value to 1, too, because retrying shell commands can produce new
> errors.

Personally, I like counting retries because it also counts the number of
times the transaction actually failed for some reason. But this is a
marginal preference, and one can easily be switched to the other.
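
To make the difference concrete, here is a minimal sketch of a retry loop
driven by a maximum number of attempts. It is not pgbench code; the names
max_attempts, run_transaction and is_retryable_error are made up for
illustration. A transaction tried at most N times is retried at most N-1
times, so the two option spellings only shift the argument by one, and per
script execution the number of failures stays bounded by
number_of_transactions_in_script * max_attempts either way.

/*
 * Minimal sketch, not pgbench code: a client-side retry loop driven by a
 * configurable maximum number of attempts.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum { TX_OK, TX_SERIALIZATION, TX_DEADLOCK, TX_OTHER } tx_status;

/* Stand-in for executing one transaction of the script. */
static tx_status
run_transaction(void)
{
	return TX_OK;			/* a real client would talk to the server here */
}

/* Only serialization failures and deadlocks are worth retrying. */
static bool
is_retryable_error(tx_status s)
{
	return s == TX_SERIALIZATION || s == TX_DEADLOCK;
}

int
main(void)
{
	int			max_attempts = 1;	/* default: no retries, i.e. current behavior */
	int			attempts = 0;
	int			retries = 0;
	int			failures = 0;
	tx_status	status;

	for (;;)
	{
		attempts++;
		status = run_transaction();
		if (status == TX_OK)
			break;
		failures++;			/* every failed attempt is counted */
		if (!is_retryable_error(status) || attempts >= max_attempts)
			break;			/* non-retryable error or attempt budget spent */
		retries++;			/* retries == attempts - 1 when every attempt fails */
	}

	printf("attempts=%d retries=%d failures=%d\n", attempts, retries, failures);
	return (status == TX_OK) ? 0 : 1;
}

Whether the report shows retries or attempts is then just a matter of which
counter gets printed; the loop itself is the same.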

-- 
Fabien.


