Thread: Import Database


Import Database

From:
Sathish Kumar
Date:
Hi,

I am trying to import a 300+ GB database to another server, and it is taking a long time.

Server: 4 vCPU, 15 GB RAM
psql -h newserver -U dbuser -d production -W < prod.sql


Is there a way to speed up the import by tweaking PostgreSQL config settings like maintenance_work_mem, work_mem, shared_buffers, etc.?

Re: Import Database

From:
Ravi Krishna
Date:
IMO you are using the slowest tool to import.

Just one quick question: why can't you take a cluster backup using any of the available tools, and then drop all the
unwanted databases after you import the cluster?

pg_basebackup will do a good job.


Re: Import Database

From:
Andreas Kretschmer
Date:

Am 05.05.19 um 18:47 schrieb Sathish Kumar:
> Is there a way to speed up the importing process by tweaking 
> Postgresql config like maintenance_workmem, work_mem, shared_buffers etc.,

Sure: take the dump in custom format and use pg_restore with -j 
<number_of_jobs>. You can also increase maintenance_work_mem, perhaps to 
4GB, to speed up index creation.
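Put together, the approach described above might look like the following sketch. The host names and user are taken from the thread; the file name, job count, and memory value are illustrative placeholders:

```shell
# Dump in custom format (-Fc); unlike a plain SQL dump, this format
# can be restored with parallel jobs.
pg_dump -h olddbserver -U dbuser -Fc -f prod.dump production

# Restore with 4 parallel jobs (roughly match the server's vCPU count).
# PGOPTIONS passes session settings through libpq, so each restore
# connection gets the larger maintenance_work_mem for index builds.
PGOPTIONS='-c maintenance_work_mem=4GB' \
  pg_restore -h newserver -U dbuser -d production -j 4 prod.dump
```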


Andreas

-- 
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Import Database

From:
Ron
Date:
On 5/5/19 12:20 PM, Andreas Kretschmer wrote:
>
>
> Am 05.05.19 um 18:47 schrieb Sathish Kumar:
>> Is there a way to speed up the importing process by tweaking Postgresql 
>> config like maintenance_workmem, work_mem, shared_buffers etc.,
>
> sure, take the dump in custom-format and use pg_restore with -j 
> <number_of_jobs>.

Custom format or directory format?

-- 
Angular momentum makes the world go 'round.



Re: [External] Re: Import Database

From:
Vijaykumar Jain
Date:
Yes.
I bump up maintenance_work_mem temporarily during a restore; it helps rebuild indexes a little faster.
Turning fsync off during the restore will also speed it up a little, but in case of a crash you may have to restart the restore from scratch.
Also, do you have the option to take a pg_dump and run pg_restore, or do you only have the raw SQL dump to work with?
If you can take a dump again, try pg_dump and pg_restore with -Fc (custom format) and -j n (parallel jobs), along with the temporary bump in maintenance_work_mem.
That should make the restore somewhat faster than a raw SQL dump, I think.
If you are on pg10 or above, you can use logical replication to mirror the database.
There are blogs by several people explaining how to do that, which may be helpful.
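For the pg10+ logical replication route, the outline is roughly as follows. This is a hedged sketch, not from the thread: the publication/subscription names and connection string are illustrative, the source must have wal_level = logical, and the table schema must already exist on the target:

```shell
# On the source (publisher): publish all tables in the database.
psql -h olddbserver -U dbuser -d production \
  -c "CREATE PUBLICATION prod_pub FOR ALL TABLES;"

# On the target (subscriber): subscribe to the publication; the initial
# table sync copies existing data, then changes stream continuously.
psql -h newserver -U dbuser -d production \
  -c "CREATE SUBSCRIPTION prod_sub
        CONNECTION 'host=olddbserver dbname=production user=dbuser'
        PUBLICATION prod_pub;"
```

Note that this path would not apply here in the end: the original poster later reports the source is PostgreSQL 9.6, which predates built-in logical replication.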


On Sun, 5 May 2019 at 10:29 PM Ravi Krishna <ravi_krishna@aol.com> wrote:
IMO you are using the slowest tool to import.

Just one quick question: Why can't you take cluster backup using any of the tools available and then drop all
unwanted databases after you import the cluster.

pg_basebackup will do a good job.

--

Regards,
Vijay

Re: [External] Re: Import Database

From:
Sathish Kumar
Date:
Hi,

I am trying to export our database from a GCE instance to Google Cloud SQL.

Below are the commands used to export and import the database. I am exporting only the one database that is required.

Export:
pg_dump -h olddbserver -U dbuser --format=plain --no-owner --no-acl production     | sed -E 's/(DROP|CREATE|COMMENT ON) EXTENSION/-- \1 EXTENSION/g' > prod.sql

Import:
psql -h newcloudsqldbserver -U dbuser -d production -W < prod.sql

On Mon, May 6, 2019, 1:32 AM Vijaykumar Jain <vjain@opentable.com> wrote:
Yes.
I do bump up maintenance_work_mem temporarily during a restore.
it helps in rebuilding on indexes a little faster.
Turning fsync off during restore will help the restore a little fast too but in case of any crash you may have to restart the restore from scratch.
Also do have the option to take pg_dump and run pg_restore ? or you just the have the raw sql dump to work with?
if you have the option of taking a dump again, you can try using pg_dump and pg_restore with -Fc (custom format) and -j n (parallel)  option along with temp bump in maint memory.
This will make the restore a little faster that raw sql dump I think.
If you are on pg10 or above? you can use logical replication to mirror  the database.
There are blogs by several people explaining how to do that, that may be helpful.


On Sun, 5 May 2019 at 10:29 PM Ravi Krishna <ravi_krishna@aol.com> wrote:
IMO you are using the slowest tool to import.

Just one quick question: Why can't you take cluster backup using any of the tools available and then drop all
unwanted databases after you import the cluster.

pg_basebackup will do a good job.

--

Regards,
Vijay

Re: [External] Re: Import Database

From:
Sathish Kumar
Date:
Hi All,

PostgreSQL version: 9.6

On Mon, May 6, 2019, 7:14 AM Sathish Kumar <satcse88@gmail.com> wrote:
Hi,

I am trying to export our database in GCE instance to Google Cloud SQL.

Below are the commands used to export/import the database. I am exporting only 1 database which is required.

Export:
pg_dump -h olddbserver -U dbuser --format=plain --no-owner --no-acl production     | sed -E 's/(DROP|CREATE|COMMENT ON) EXTENSION/-- \1 EXTENSION/g' > prod.sql

Import:
psql -h newcloudsqldbserver -U dbuser -d production -W < prod.sql

On Mon, May 6, 2019, 1:32 AM Vijaykumar Jain <vjain@opentable.com> wrote:
Yes.
I do bump up maintenance_work_mem temporarily during a restore.
it helps in rebuilding on indexes a little faster.
Turning fsync off during restore will help the restore a little fast too but in case of any crash you may have to restart the restore from scratch.
Also do have the option to take pg_dump and run pg_restore ? or you just the have the raw sql dump to work with?
if you have the option of taking a dump again, you can try using pg_dump and pg_restore with -Fc (custom format) and -j n (parallel)  option along with temp bump in maint memory.
This will make the restore a little faster that raw sql dump I think.
If you are on pg10 or above? you can use logical replication to mirror  the database.
There are blogs by several people explaining how to do that, that may be helpful.


On Sun, 5 May 2019 at 10:29 PM Ravi Krishna <ravi_krishna@aol.com> wrote:
IMO you are using the slowest tool to import.

Just one quick question: Why can't you take cluster backup using any of the tools available and then drop all
unwanted databases after you import the cluster.

pg_basebackup will do a good job.

--

Regards,
Vijay

Re: Import Database

From:
Andreas Kretschmer
Date:

Am 05.05.19 um 19:26 schrieb Ron:
> On 5/5/19 12:20 PM, Andreas Kretschmer wrote:
>>
>>
>> Am 05.05.19 um 18:47 schrieb Sathish Kumar:
>>> Is there a way to speed up the importing process by tweaking 
>>> Postgresql config like maintenance_workmem, work_mem, shared_buffers 
>>> etc.,
>>
>> sure, take the dump in custom-format and use pg_restore with -j 
>> <number_of_jobs>.
>
> Custom format or directory format?

Both are possible for that.

Regards, Andreas

-- 
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Import Database

From:
Sathish Kumar
Date:
Hi All,

Is there any other option to restore faster using psql, since I have to export it as a plain-text dump?

--format=plain

Only the plain SQL format is supported by Cloud SQL.

I cannot use pg_restore on a plain-text SQL dump.
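When only psql and a plain-SQL dump are available, the session-level settings mentioned earlier in the thread can still be applied through PGOPTIONS, since server-wide changes such as fsync are not an option on managed Cloud SQL. This is a sketch under assumptions: the maintenance_work_mem bump comes from the thread, while synchronous_commit=off is an addition not suggested by anyone here, and the memory value is illustrative:

```shell
# maintenance_work_mem speeds up the CREATE INDEX statements in the dump.
# synchronous_commit=off (an assumption, not from the thread) skips the
# wait for WAL flush on each COMMIT; it risks losing the last transactions
# on a crash, which is acceptable for a restartable restore.
PGOPTIONS='-c maintenance_work_mem=2GB -c synchronous_commit=off' \
  psql -h newcloudsqldbserver -U dbuser -d production -W -f prod.sql
```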

On Mon, May 6, 2019, 6:35 PM Andreas Kretschmer <andreas@a-kretschmer.de> wrote:


Am 05.05.19 um 19:26 schrieb Ron:
> On 5/5/19 12:20 PM, Andreas Kretschmer wrote:
>>
>>
>> Am 05.05.19 um 18:47 schrieb Sathish Kumar:
>>> Is there a way to speed up the importing process by tweaking
>>> Postgresql config like maintenance_workmem, work_mem, shared_buffers
>>> etc.,
>>
>> sure, take the dump in custom-format and use pg_restore with -j
>> <number_of_jobs>.
>
> Custom format or directory format?

both are possible for that.

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com