Re: pg_basebackup cannot compress to STDOUNT

From: Support
Subject: Re: pg_basebackup cannot compress to STDOUNT
Date:
Msg-id: e6e24cc6-9b3f-aafd-b2b4-2902527f2a2f@e-blokos.com
In reply to: Re: pg_basebackup cannot compress to STDOUNT (Paul Förster <paul.foerster@gmail.com>)
List: pgsql-general
On 5/8/2020 11:51 PM, Paul Förster wrote:

> Hi Admin,
>
>> On 08. May, 2020, at 21:31, Support <admin@e-blokos.com> wrote:
>> 2) Command run?
>> ssh postgres@nodeXXX "pg_basebackup -h /run/postgresql -Ft -D- | pigz -c -p2" | pigz -cd -p2 | tar -xf- -C /usr/local/pgsql/data
> I don't get it, sorry. Do I understand you correctly here that you want an online backup of a *remotely* running PostgreSQL instance on your local machine?
>
> If so, why not just let pg_basebackup connect remotely and let it do its magic? Something like this:
>
> $ mkdir -p /usr/local/pgsql/data
> $ cd /usr/local/pgsql/data
> $ pg_basebackup -D /run/postgresql -Fp -P -v -h nodeXXX -p 5432 -U replicator
> $ pg_ctl start
>
> You'd have to have a role with replication privs or superuser and you'd have to adapt the port of course.
>
> No need to take care of any WALs manually. It is all taken care of by pg_basebackup. The only real drawback is that if you have tablespaces, you'd have to create all directories of the tablespaces beforehand, which is why we removed them again after initially having tried the feature.
>
> That's basically how I create async replicas on our site, which is why I additionally add -R to the above command.
>
> Cheers,
> Paul
>
The trick with my command above is to speed up the transfer by sending a single 
compressed stream over the network.
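
For the archives, here is the same idea written out as a sketch (untested exactly as shown; nodeXXX, the socket directory, the target path and the pigz thread count -p2 are placeholders from my setup, and -X fetch is assumed because WAL streaming cannot be combined with tar output to stdout):

# take a tar-format base backup on the remote node, compress it there with pigz,
# then decompress and unpack it locally in one pass
ssh postgres@nodeXXX \
    "pg_basebackup -h /run/postgresql -Ft -X fetch -D - | pigz -c -p2" \
  | pigz -cd -p2 \
  | tar -xf - -C /usr/local/pgsql/data

That way only one compressed stream crosses the network, the compression CPU stays 
on the sending side and the decompression CPU on the receiving side.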


