The best performance for moving Postgres data comes from parallel pg_dump and pg_restore with no compression; in your situation I would use a parallelism level of 8.
Order of events (rough command sketches for the individual steps follow the list):
Move only the structure of the database to the destination server using a schema-only pg_dump. Use the disable-triggers option and make sure triggers remain disabled on the destination.
Take another pg_dump in parallel to get only the data.
Create 3 scripts to capture all primary key constraints, regular (non-constraint) indexes, and foreign key constraints.
Create 3 scripts to drop the 3 types mentioned above.
Create 3 scripts to recreate the 3 types mentioned above.
Increase the server memory to 256 GB and set work_mem = 4GB so each session has plenty of memory for sorting; CREATE INDEX sorts are actually bounded by maintenance_work_mem, so raise that as well.
Drop all indexes, constraints and triggers
Do the pg_restore of the data
Verify the counts
Run the index and constraint rebuild scripts.
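For illustration, the two dump steps might look something like this (host, database, and path names are placeholders; the -Fd directory format is what allows -j, and -Z 0 turns compression off):

  pg_dump -h source-host -d mydb --schema-only -Fc -f /backup/mydb_schema.dump
  pg_restore -h dest-host -d mydb /backup/mydb_schema.dump
  pg_dump -h source-host -d mydb --data-only -Fd -j 8 -Z 0 -f /backup/mydb_data

If any triggers do end up enabled on the destination, ALTER TABLE ... DISABLE TRIGGER ALL keeps them out of the way until the load is finished.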
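A rough sketch of the capture scripts, generated straight from the catalogs (run against the destination before dropping anything; the schema filter is up to you):

  psql -d mydb -Atc "SELECT 'ALTER TABLE '||conrelid::regclass||' ADD CONSTRAINT '||quote_ident(conname)||' '||pg_get_constraintdef(oid)||';' FROM pg_constraint WHERE contype='p'" > create_pkeys.sql
  psql -d mydb -Atc "SELECT pg_get_indexdef(i.indexrelid)||';' FROM pg_index i JOIN pg_class c ON c.oid=i.indexrelid JOIN pg_namespace n ON n.oid=c.relnamespace WHERE n.nspname NOT IN ('pg_catalog','information_schema','pg_toast') AND NOT EXISTS (SELECT 1 FROM pg_constraint x WHERE x.conindid=i.indexrelid)" > create_indexes.sql

The foreign key script is the first query with contype='f', and the drop scripts are generated the same way using DROP CONSTRAINT / DROP INDEX.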
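On the memory side, a sketch of the settings (the values are only illustrative; PostgreSQL 12 can also parallelize B-tree builds):

  psql -d mydb -c "ALTER SYSTEM SET maintenance_work_mem = '4GB'"
  psql -d mydb -c "ALTER SYSTEM SET max_parallel_maintenance_workers = 4"
  psql -d mydb -c "SELECT pg_reload_conf()"

Both settings take effect for new sessions after the reload.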
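The data load itself, again with placeholder names:

  pg_restore -h dest-host -d mydb --data-only -j 8 /backup/mydb_data

(--disable-triggers can be added here if any triggers are still in place; it needs a superuser connection.)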
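For the count check, exact counts over 2 TB are slow, so the planner's estimates are usually enough for a first pass; run the same thing on both sides and compare:

  psql -d mydb -c "ANALYZE"
  psql -d mydb -c "SELECT relname, n_live_tup FROM pg_stat_user_tables ORDER BY relname"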
This takes some up-front scripting, but 2 TB should be done in a few hours, depending on the network. Remember: no compression on the pg_dump and pg_restore file sets.
Hi,
I am working on migrating Aurora PostgreSQL to EC2 running community PostgreSQL.
I am using PostgreSQL version 12. Both machines have 128 GB of RAM and 32 CPU cores.
I have set the number of parallel processes for logical replication to 12.
I am able to copy data at around 8 MB/sec using pglogical.
I have dropped all indexes/triggers/constraints on the destination except the primary key. The table size is 2 TB. Is there any way I can improve the performance?
Thanks