Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors

From: Marina Polyakova
Subject: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
Msg-id: fc2d3f13e4c2e4ebe061fb2e26f9f68b@postgrespro.ru
In reply to: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors (Fabien COELHO <coelho@cri.ensmp.fr>)
Responses: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors (Marina Polyakova <m.polyakova@postgrespro.ru>)
List: pgsql-hackers
On 29-03-2018 22:39, Fabien COELHO wrote:
>> The conception of the max-retry option seems strange to me. If the 
>> number of retries reaches the max-retry option, then we just 
>> increment the counter of failed transactions and try again 
>> (possibly with different random numbers).

Then the client starts another script, but by chance, or because only 
a small number of scripts is used, it can be the same one.

>> At the end we should distinguish the number of erroneous 
>> transactions from failed transactions; to find this difference the 
>> documentation suggests rerunning pgbench with debugging on.

If I understood you correctly, this difference is the total number of 
retries, and it is included in all reports.

>> Maybe I didn't catch the idea, but it seems to me max-tries should 
>> be removed. On a transaction serialization or deadlock error, 
>> pgbench should increment the counter of failed transactions, reset 
>> the conditional stack, variables, etc. (but not the random 
>> generator), and then start a new transaction from the first line of 
>> the script.

When I sent the first version of the patch there were only rollbacks, 
and the idea of retrying failed transactions was approved (see [1], 
[2], [3], [4]). And thank you, I fixed the patch so that it resets the 
client variables in case of errors too, and not only in case of 
retries (see attached; it is based on commit 
3da7502cd00ddf8228c9a4a7e4a08725decff99c).
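
To make the behaviour discussed here concrete, below is a minimal 
standalone sketch of this kind of retry logic. All names in it 
(ClientState, handle_error, max_tries, ...) are hypothetical 
illustrations, not identifiers from the actual patch:

    /* Toy model of the retry logic discussed above; hypothetical
     * names, not the actual patch. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define ERRCODE_SERIALIZATION_FAILURE "40001"
    #define ERRCODE_DEADLOCK_DETECTED     "40P01"

    typedef struct ClientState
    {
        int retries;   /* retries of the current transaction so far */
        int failures;  /* transactions given up as failed */
    } ClientState;

    static bool
    error_is_retryable(const char *sqlstate)
    {
        /* only serialization and deadlock failures are retried */
        return strcmp(sqlstate, ERRCODE_SERIALIZATION_FAILURE) == 0 ||
               strcmp(sqlstate, ERRCODE_DEADLOCK_DETECTED) == 0;
    }

    /*
     * Decide what to do after an SQL command fails with the given
     * SQLSTATE.  In both branches the conditional stack, the client
     * variables, etc. would be reset (but not the random generator
     * state, so a retry reruns the same transaction).  Returns true
     * to retry from the first script line, false to count a failed
     * transaction and move on to the next (possibly same) script.
     */
    static bool
    handle_error(ClientState *st, const char *sqlstate, int max_tries)
    {
        if (error_is_retryable(sqlstate) && st->retries < max_tries)
        {
            st->retries++;
            return true;
        }
        st->failures++;
        st->retries = 0;
        return false;
    }

    int
    main(void)
    {
        ClientState st = {0, 0};

        printf("%d\n", handle_error(&st, "40P01", 2)); /* 1: retry deadlock */
        printf("%d\n", handle_error(&st, "40001", 2)); /* 1: retry again */
        printf("%d\n", handle_error(&st, "40001", 2)); /* 0: max-tries reached */
        printf("failed transactions: %d\n", st.failures); /* 1 */
        return 0;
    }

A retryable error reruns the same transaction (keeping the random 
generator state), while any other error, or exhausting max-tries, 
counts the transaction as failed and moves on.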

> ISTM that the idea is that the client application should give up at 
> some point and report an error to the end user, kind of a "timeout" 
> on trying, and that max-retry would implement this logic of giving 
> up: the transaction which was intended, represented by a given 
> initial random generator state, could not be committed even after 
> some iterations.
> 
> Maybe the max retry should rather be expressed in time rather than 
> number of attempts, or both approaches could be implemented? But 
> there is a logic of retrying the same (try again what the client 
> wanted) vs retrying something different (another client need is 
> served).

I'm afraid that we will have a problem in debugging mode: should we 
report a failure (which will be retried) or an error (which will not 
be retried)? Because only after executing the following script 
commands (to roll back this transaction block) will we know how much 
time was spent on the execution of the current script...
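
As for giving up by time as well as by number of attempts, a minimal 
sketch of how both limits could coexist might look like this (again 
with hypothetical names; max_time_us is not an existing pgbench 
option):

    /* Sketch of bounding retries by attempts and by elapsed time;
     * illustrative parameters, not existing pgbench options. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    static int64_t
    now_us(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t) ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
    }

    /* Retry only while both limits hold; a limit of 0 means "no limit". */
    static bool
    may_retry(int retries, int max_tries,
              int64_t tx_start_us, int64_t max_time_us)
    {
        if (max_tries > 0 && retries >= max_tries)
            return false;
        if (max_time_us > 0 && now_us() - tx_start_us >= max_time_us)
            return false;
        return true;
    }

    int
    main(void)
    {
        int64_t start = now_us();
        int     retries = 0;

        /* retry at most 10 times and for at most 100 ms overall */
        while (may_retry(retries, 10, start, 100 * 1000))
            retries++;  /* a real client would rerun the transaction here */

        return (retries <= 10) ? 0 : 1;
    }

With either limit set to zero meaning "no limit", the time-based and 
attempt-based approaches could be offered separately or together.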

[1] https://www.postgresql.org/message-id/CACjxUsOfbn72EaH4i_OuzdY-0PUYfg1Y3o8G27tEA8fJOaPQEw%40mail.gmail.com
[2] https://www.postgresql.org/message-id/20170615211806.sfkpiy2acoavpovl%40alvherre.pgsql
[3] https://www.postgresql.org/message-id/CAEepm%3D3TRTc9Fy%3DfdFThDa4STzPTR6w%3DRGfYEPikEkc-Lcd%2BMw%40mail.gmail.com
[4] https://www.postgresql.org/message-id/CACjxUsOQw%3DvYjPWZQ29GmgWU8ZKj336OGiNQX5Z2W-AcV12%2BNw%40mail.gmail.com

-- 
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
