Re: WIP Patch: Pgbench Serialization and deadlock errors

From: Marina Polyakova
Subject: Re: WIP Patch: Pgbench Serialization and deadlock errors
Date:
Msg-id: 2794acb0ec4c1cf94fa60c182723490c@postgrespro.ru
In reply to: Re: WIP Patch: Pgbench Serialization and deadlock errors (Fabien COELHO <coelho@cri.ensmp.fr>)
Responses: Re: Re: WIP Patch: Pgbench Serialization and deadlock errors
List: pgsql-hackers
On 12-01-2018 18:13, Fabien COELHO wrote:
> Hello Marina,

Hello, Fabien!

>>> If you want 2 transactions, then you have to put them in two scripts,
>>> which looks fine to me. Different transactions are expected to be
>>> independent; otherwise they should be merged into one transaction.
>> 
>> Therefore, if the script consists of several single statements (=
>> several transactions), you cannot retry them. For example, suppose
>> the script looks like this:
>> 
>> UPDATE xy1 SET x = 1 WHERE y = 1;
>> UPDATE xy2 SET x = 2 WHERE y = 2;
>> UPDATE xy3 SET x = 3 WHERE y = 3;
>> 
>> If this restriction is ok for you, I'll simplify the code :)
> 
> Yes, that is what I'm suggesting. If you want to restart them, you can
> put them in 3 scripts.

Okay, in the next patch I'll simplify the code.
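For the record, the split Fabien suggests would look something like this (the file names are illustrative; when several -f options are given, pgbench picks one script per transaction, so each single-statement transaction becomes independently retryable):

```shell
# Split the three independent UPDATEs into three pgbench script files,
# one transaction per script.
cat > xy1.sql <<'EOF'
UPDATE xy1 SET x = 1 WHERE y = 1;
EOF
cat > xy2.sql <<'EOF'
UPDATE xy2 SET x = 2 WHERE y = 2;
EOF
cat > xy3.sql <<'EOF'
UPDATE xy3 SET x = 3 WHERE y = 3;
EOF

# pgbench then chooses one of the scripts for each transaction:
#   pgbench -f xy1.sql -f xy2.sql -f xy3.sql mydb
```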

>>> Under these restrictions, ISTM that a retry is something like:
>>> ...
>> 
>> If we successfully complete a failed transaction block and process its
>> end command in CSTATE_END_COMMAND, we may want to retry it. So do you
>> think that in this case it is ok to go to CSTATE_ABORTED at the end
>> of CSTATE_END_COMMAND?
> 
> Dunno.
> 
> I'm fine with having END_COMMAND skipping to START_TX if it can be
> done easily and cleanly, esp without code duplication.

If I understand you correctly, I'm not sure that we should skip 
collecting statistics for the command that completes a failed 
transaction block; for that command, those may be the only statistics 
we get.

> ISTM that ABORTED & FINISHED are currently exactly the same. That would
> put a particular use to aborted. Also, there are many points where the
> code may go to "aborted" state, so reusing it could help avoid
> duplicating stuff on each abort decision.

Thanks, I agree with you.

>>> Once this works, maybe it could go a step further by restarting at
>>> savepoints. I'd put restrictions there to ease detecting a savepoint
>>> so that it cannot occur in a compound command for instance. This 
>>> would
>>> probably require a new state. Fine.
>> 
>> We discussed the savepoints earlier in [1]:
> 
> Yep. I'm trying to suggest an incremental path with simple but yet
> quite useful things first.

This question ("if there's a failure, which savepoint should we roll 
back to before starting the execution again? ...") mostly concerns the 
basic idea of how to maintain savepoints in this feature, rather than 
the exact architecture of the code, so we can discuss it now :)
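As a hypothetical sketch of the restart-at-savepoint idea (table and savepoint names are illustrative, and pgbench does not treat these commands specially today):

```sql
BEGIN;
UPDATE xy1 SET x = 1 WHERE y = 1;
SAVEPOINT s1;
-- on a serialization/deadlock failure in the command below, the client
-- could issue ROLLBACK TO SAVEPOINT s1; and re-run only the commands
-- after s1, instead of restarting the whole transaction
UPDATE xy2 SET x = 2 WHERE y = 2;
COMMIT;
```

This is where detecting savepoints gets tricky: it is only easy if a SAVEPOINT cannot hide inside a compound command, which matches the restriction Fabien proposes.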

-- 
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

