On 11/05/2017 21:09, Andres Freund wrote:
> On 2017-11-05 17:38:39 -0500, Robert Haas wrote:
>> On Sun, Nov 5, 2017 at 5:17 AM, Lucas <lucas75@gmail.com> wrote:
>>> The patch adds a "--lock-early" option which makes pg_dump issue
>>> shared locks on all tables in the backup TOC as each parallel worker
>>> starts. That way, the backup has a very small chance of failing; when
>>> it does fail, it happens in the first few seconds of the backup job.
>>> My backup scripts (not included here) are aware of that and retry the
>>> backup in case of failure.
>>
>> I wonder why we don't do this already ... and by default.
>
> Well, the current approach afaics requires #relations * 2 locks, whereas
> acquiring them in every worker would scale that with the number of
> workers.
Yes, that is why I proposed it as an option. As an option, it will not
affect anyone who does not want to use it.
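To put the scaling concern in concrete terms, here is a back-of-envelope
sketch (illustrative numbers only; the real accounting in the shared lock
table is more subtle than this):

```python
# Rough lock-table usage for a parallel dump. Today each relation costs
# about two granted locks (the leader's, plus at most one worker's at a
# time); with --lock-early every worker holds a lock on every table for
# the whole run, so usage scales with the worker count.
relations = 10_000   # illustrative: tables in the backup TOC
workers = 8          # illustrative: pg_dump -j 8

locks_now = relations * 2                      # 20000
locks_lock_early = relations * (workers + 1)   # 90000: leader + 8 workers

print(locks_now, locks_lock_early)
```

This is why the extra cost is opt-in: users who hit the lock-starvation
failure can pay for a larger lock table, everyone else is unaffected.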
> IIUC the problem here is that even though a lock is already
> held by the main backend an independent locker's request will prevent
> the on-demand lock by the dump worker from being granted. It seems to
> me the correct fix here would be to somehow avoid the fairness logic in
> the parallel dump case - although I don't quite know how to best do so.
It seems natural to think of the several connections sharing a
synchronized snapshot as a single connection. It may then be reasonable
to grant a shared lock out of turn if any connection in the same shared
snapshot already has a granted lock on the same relation. Last year Tom
mentioned that there is already queue-jumping logic of that sort in the
lock manager for other purposes. Although it seems conceptually simple,
I suspect the implementation is not.
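To make the idea concrete, here is a toy model of a fair lock queue with
the proposed grant rule bolted on. This is not PostgreSQL's actual lock
manager; the class, modes, and "snapshot group" bookkeeping are all
invented for illustration. It shows both the starvation Andres describes
(a later shared request queues behind a waiting exclusive one) and the
queue-jump that sharing a snapshot would justify:

```python
from collections import deque

class ToyLock:
    """Fair lock queue: requests normally wait behind earlier waiters."""

    def __init__(self):
        self.holders = []      # granted (conn, mode) pairs
        self.queue = deque()   # waiting (conn, mode) pairs, FIFO

    def _compatible(self, mode):
        if mode == "shared":
            return all(m == "shared" for _, m in self.holders)
        return not self.holders        # exclusive needs the lock free

    def request(self, conn, mode, group=None, groups=None):
        # Proposed rule: if a sibling connection in the same synchronized-
        # snapshot group already holds this lock, grant immediately (when
        # compatible) instead of queuing behind earlier waiters.
        siblings = groups.get(group, set()) if groups else set()
        sibling_holds = any(c in siblings for c, _ in self.holders)
        if self._compatible(mode) and (not self.queue or sibling_holds):
            self.holders.append((conn, mode))
            return "granted"
        self.queue.append((conn, mode))
        return "waiting"

# The scenario from the thread: leader locks a table, an independent
# session requests ACCESS EXCLUSIVE and waits, then a dump worker asks
# for the same shared lock.
groups = {"dump": {"leader", "worker1"}}
lk = ToyLock()
assert lk.request("leader", "shared", "dump", groups) == "granted"
assert lk.request("other", "exclusive") == "waiting"
# Fairness alone would starve the worker behind "other"...
assert lk.request("stranger", "shared") == "waiting"
# ...but the sibling rule lets the dump worker jump the queue:
assert lk.request("worker1", "shared", "dump", groups) == "granted"
```

The grant decision is one extra condition, which is why the idea sounds
simple; in the real lock manager the group membership, deadlock checks,
and wakeup ordering would all have to learn about it.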
On the other hand, the lock-early option is very simple and has no
impact on anyone who does not want to use it.
---
Lucas
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers