Re: Would it be possible to have parallel archiving?

From David Steele
Subject Re: Would it be possible to have parallel archiving?
Date
Msg-id dbd2184b-987b-7a49-a341-f8c44940bf3a@pgmasters.net
In reply to Re: Would it be possible to have parallel archiving?  (Andrey Borodin <x4mmm@yandex-team.ru>)
Responses Re: Would it be possible to have parallel archiving?
List pgsql-hackers
On 8/28/18 4:34 PM, Andrey Borodin wrote:
>>
>> I still don't think it's a good idea and I specifically recommend
>> against making changes to the archive status files- those are clearly
>> owned and managed by PG and should not be whacked around by external
>> processes.
> If you do not write to archive_status, you basically have two options:
> 1. On every archive_command recheck that archived file is identical to file that is already archived. This hurts
> performance.
> 2. Hope that files match. This does not add any safety compared to whacking archive_status. This approach is prone to
> core changes as writes are.
 

Another option is to maintain the state of what has been safely archived
(and what has errored) locally.  This allows pgBackRest to rapidly
return the status to Postgres without rechecking against the repository,
which, as you note, would be very slow.

This also allows more than one archive command to be run safely, since all
of the commands must succeed before Postgres will mark the segment as done.
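
For illustration only, here is a minimal sketch of that idea in Python (not
pgBackRest's actual implementation): an archive_command wrapper that records
the outcome for each WAL segment in a local state directory, so a repeated
call for the same segment can answer from local state instead of rechecking
the repository.  STATE_DIR, REPO_DIR, and push_to_repository() are
hypothetical names, and the "repository" here is just a local directory
standing in for real archive storage.

    #!/usr/bin/env python3
    # Sketch of an archive_command wrapper that caches per-segment results
    # locally, so a repeat call for an already-archived WAL segment can
    # return success without touching the repository again.
    import os
    import shutil
    import sys

    STATE_DIR = "/var/lib/wal-archive-state"  # hypothetical local state dir
    REPO_DIR = "/var/lib/wal-archive-repo"    # hypothetical repository

    def push_to_repository(wal_path, wal_name):
        # Stand-in for the real archiver: copy the segment to REPO_DIR.
        try:
            shutil.copy(wal_path, os.path.join(REPO_DIR, wal_name))
            return True
        except OSError:
            return False

    def archive(wal_path, wal_name):
        os.makedirs(STATE_DIR, exist_ok=True)
        os.makedirs(REPO_DIR, exist_ok=True)
        state_file = os.path.join(STATE_DIR, wal_name)

        # Already archived successfully?  Report success immediately.
        if os.path.exists(state_file):
            with open(state_file) as f:
                if f.read().strip() == "ok":
                    return 0

        ok = push_to_repository(wal_path, wal_name)

        # Record the outcome; a later call for a segment recorded as "ok"
        # returns success without rechecking the repository.
        with open(state_file, "w") as f:
            f.write("ok" if ok else "error")

        return 0 if ok else 1

    if __name__ == "__main__":
        # Postgres passes the segment path (%p) and file name (%f).
        sys.exit(archive(sys.argv[1], sys.argv[2]))

Such a wrapper would be wired in through postgresql.conf, e.g.
archive_command = '/path/to/archive_wrapper.py "%p" "%f"', and because
Postgres only marks the segment done when the whole command exits 0,
several archivers can be chained with && and each must succeed.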

It's true that reading archive_status is susceptible to core changes, but
the less interaction the better, I think.

Regards,
-- 
-David
david@pgmasters.net

