Native Logical Replication Initial Import Qs

From: Don Seiler
Subject: Native Logical Replication Initial Import Qs
Date:
Msg-id: CAHJZqBB7JuhXca=EE1vJqFU_Ft6uahPDj-Bj4wuSpntZzgdf_g@mail.gmail.com
Replies: Re: Native Logical Replication Initial Import Qs
List: pgsql-general
Good afternoon.

I'm looking at having to move a fleet of PG 12 databases from Ubuntu 18.04 to Ubuntu 22.04. This means crossing the dreaded libc collation change, so we'll have to migrate via pg_dump/restore, or via logical replication for the bigger/busier ones. We're also planning to use PG 15 on the destination (Ubuntu 22.04) side to kill two birds with one stone, as much as I'd prefer to have minimal moving parts.

On the logical replication front, the concern is with the initial data import that happens (by default) when the subscription is created. I know you can tell the subscription not to copy data and instead use pg_dump with a replication slot snapshot to achieve the same thing manually. However, I can't explain (even to myself) why that would be better than just having the subscription do it upon creation. Given that I can create pub/sub sets for individual tables for parallel operations, I'm curious what advantages there are in using pg_dump to do this import.
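For reference, the manual variant being described might look roughly like the sketch below. All host, database, publication, and slot names here are hypothetical, and this assumes the standard recipe of exporting a snapshot from a manually created replication slot; the key constraint is that the exported snapshot is only valid while the replication session that created it stays open, so steps 2 and 3 have to overlap (e.g. keep an interactive psql session open for step 2).

```
# 1. Publisher: create the publication (hypothetical table names).
psql -h src mydb -c "CREATE PUBLICATION app_pub FOR TABLE orders, order_items;"

# 2. Open a *replication* connection and create the slot, exporting a snapshot.
#    Keep this session open until the dump in step 3 has started.
psql "host=src dbname=mydb replication=database" \
     -c "CREATE_REPLICATION_SLOT app_sub LOGICAL pgoutput EXPORT_SNAPSHOT;"
# Returns a snapshot name, e.g. 00000003-00000123-1.

# 3. Dump exactly that snapshot and pipe it into the destination.
pg_dump -h src mydb --snapshot=00000003-00000123-1 \
        --table=orders --table=order_items | psql -h dst mydb

# 4. Subscriber: attach to the pre-created slot without re-copying data.
psql -h dst mydb -c "CREATE SUBSCRIPTION app_sub
     CONNECTION 'host=src dbname=mydb'
     PUBLICATION app_pub
     WITH (copy_data = false, create_slot = false, slot_name = 'app_sub');"
```

Replication then starts from the slot's position, which matches the snapshot the dump was taken under, so no changes are lost or duplicated.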

I had been planning to have pg_dump pipe directly into the destination database via psql. Is this faster than just having the subscription do the import? I'm curious as to why or why not. I know to create only the minimal indexes required on the destination side (i.e. identity-related indexes) and omit other indexes and constraints until after the data is loaded, but that is true for either method.
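One way to express that index-deferral split, if pg_dump is doing the load, is with pg_dump's section options; a minimal sketch with hypothetical host/database names:

```
# 1. Table definitions only (no indexes, constraints, or triggers).
pg_dump -h src mydb --section=pre-data | psql -h dst mydb

#    Note: primary keys land in post-data, so create the replica-identity
#    (PK) indexes by hand here if the subscription will apply changes
#    before step 3 runs.

# 2. Bulk-load the data.
pg_dump -h src mydb --data-only | psql -h dst mydb

# 3. Build the remaining indexes, constraints, and triggers in one pass.
pg_dump -h src mydb --section=post-data | psql -h dst mydb
```

Building indexes after the load avoids maintaining them row-by-row during the copy, which is where most of the restore-time savings come from regardless of which method does the import.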

Thanks,
Don.

--
Don Seiler
www.seiler.us
