Hi,
Digging deeper into the source code of backend/executor/execIndexing.c, I
found the following:
* Exclusion Constraints
* ---------------------
...
*
* There is a chance of deadlock, if two backends insert a tuple at the
* same time, and then perform the scan to check for conflicts. They will
* find each other's tuple, and both try to wait for each other. The
* deadlock detector will detect that, and abort one of the transactions.
* That's fairly harmless, as one of them was bound to abort with a
* "duplicate key error" anyway, although you get a different error
* message.
I suspect my example is hitting exactly this deadlock.
I'm not so happy with "That's fairly harmless". In my case I'm processing
messages from several sessions at a rate of more than 1000 messages per
second per session.
With the default deadlock_timeout of 1 second, at least two sessions may
block, and the impact can cascade: other sessions start waiting on them,
until many sessions are stuck in deadlocks. I had a situation where
throughput dropped to 6 messages per second.
A very simple example of this problem:
1. lookup a record for entity E using exclusion constraint
2. if it does not exist: insert the record
3. if this insert fails, go back to 1.
If n sessions try to execute this in parallel, the wait time can add up to
n-1 seconds (one deadlock_timeout per conflicting pair).
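To make the pattern concrete, here is a sketch of what I mean (the table
name, columns, and parameter values are made up for illustration; any
range-overlap exclusion constraint shows the same behavior):

```sql
-- btree_gist is needed so the GiST exclusion constraint can use
-- equality on a plain integer column.
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE reservation (
    entity_id int,
    during    tsrange,
    EXCLUDE USING gist (entity_id WITH =, during WITH &&)
);

-- Step 1: look up an existing record for entity E.
SELECT * FROM reservation
 WHERE entity_id = 42
   AND during && tsrange('2016-01-01 10:00', '2016-01-01 11:00');

-- Step 2: if nothing was found, insert. With two concurrent
-- backends this is where both insert their tuple, then each one
-- finds the other's tuple during the conflict re-check, and they
-- wait on each other until the deadlock detector fires.
INSERT INTO reservation (entity_id, during)
VALUES (42, tsrange('2016-01-01 10:00', '2016-01-01 11:00'));

-- Step 3: if the insert fails (deadlock or exclusion violation),
-- retry from step 1.
```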
Any chance of changing the current PostgreSQL behavior?
Regards,
Mark