Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors

From: Fabien COELHO
Subject: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
Date:
Msg-id: alpine.DEB.2.20.1707141344070.20175@lancre
In reply to: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors (Marina Polyakova <m.polyakova@postgrespro.ru>)
Responses: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
List: pgsql-hackers
Hello Marina,

>> Not necessarily? It depends on where the locks triggering the issue
>> are set, if they are all set after the savepoint it could work on a
>> second attempt.
>
> Don't you mean the deadlock failures where can really help rollback to

Yes, I mean deadlock failures can roll back to a savepoint and work on a
second attempt.

> And could you, please, give an example where a rollback to savepoint can
> help to end its subtransaction successfully after a serialization
> failure?

I do not know whether this is possible with serialization failures. It
might be if the stuff before and after the savepoint is somehow
unrelated...

> [...] I mean that the sum of transactions with serialization failure and
> transactions with deadlock failure can be greater than the total sum
> of transactions with failures.

Hmmm. Ok.

A "failure" is a transaction (in the sense of pgbench) that could not
make it to the end, even after retries. If there is a rollback and then a
retry which works, it is not a failure.

Now deadlock or serialization errors, which trigger retries, are worth
counting as well, although they are not "failures". So my format proposal
was over-optimistic, and the numbers of deadlocks and serializations would
be better placed on a retry count line. Maybe something like:

  ...
  number of failures: 12 (0.004%)
  number of retries: 64 (deadlocks: 29, serialization: 35)

--
Fabien.
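To make the proposed counting semantics concrete, here is a minimal sketch (plain Python, not pgbench code; the `tally` helper, the `max_tries` limit, and the per-transaction error lists are all hypothetical illustration): a transaction is a "failure" only if it never completes within the allowed attempts, while every deadlock or serialization error that is followed by another attempt is counted on the separate retry line.

```python
def tally(outcomes, max_tries=3):
    """Count failures and retries for a set of transactions.

    outcomes: one list per transaction, giving the error kind
    ('deadlock' or 'serialization') hit on each unsuccessful attempt
    before the transaction finally succeeds.  A list with max_tries
    or more errors means the transaction never succeeded.
    """
    failures = 0
    retries = {"deadlock": 0, "serialization": 0}
    for errors in outcomes:
        if len(errors) >= max_tries:
            # Exhausted all attempts: one failure.  The errors that
            # did lead to a retry (all but the last attempt) are
            # still counted as retries.
            failures += 1
            counted = errors[:max_tries - 1]
        else:
            # Succeeded eventually: every error triggered a retry.
            counted = errors
        for kind in counted:
            retries[kind] += 1
    return failures, retries
```

With this accounting, a transaction that succeeds on its second attempt contributes one retry and zero failures, so the retry counts can legitimately exceed the failure count, as discussed above.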