Discussion: pg_dump Performance

pg_dump Performance

From: "Ryan Wells"
Date: Friday, April 11, 2008 9:42 PM
We're having what seem like serious performance issues with pg_dump, and
I hope someone can help.

We have several tables that store binary data as bytea (image files, in
this example), and we're seeing similar slowness with tables of text
data as well.
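
For anyone trying to reproduce this, one quick way to confirm how much of
that space is the images table itself (TOAST storage included) is a psql
one-liner like the one below. The table and database names match the
example here; pg_total_relation_size() and pg_size_pretty() assume a
PostgreSQL 8.1-or-later server:

psql -h localhost -p 5432 -U postgres -d db_name -c \
    "SELECT pg_size_pretty(pg_total_relation_size('public.images'));"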

In my most recent test, the sample table was about 5 GB across 1644 rows,
with image file sizes between 1 MB and 35 MB.  The server was a 3.0 GHz
P4 running Windows XP with 2 GB of RAM, the backup was written to a
separate disk from the data, and little else was running on the system.

We're doing the following:

pg_dump -i -h localhost -p 5432 -U postgres -F c -v \
    -f "backupTest.backup" -t "public"."images" db_name

In the test above, the dump took 1 hr 45 min to complete: roughly 5 GB in
6,300 seconds, or about 0.8 MB/s.  Since we expect to have users with
50-100 GB of data, if not more, the same rate works out to roughly 17-35
hours, and backup times that approach an entire day are unacceptable.

We think there must be something we're doing wrong, but none of our
searches have turned up anything (we likely just don't know the right
search terms).  Hopefully there's a server setting or pg_dump option we
need to change, but we're open to design changes if necessary.
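
If a design change does turn out to be needed, one possibility (a sketch
only, not something verified here) is to split the dump into concurrent
per-table runs, since a single pg_dump process writes tables serially.
This assumes a POSIX shell rather than WinXP's cmd.exe, and pg_dump 8.2+
for the -T/--exclude-table flag.  Note also that the two dumps run in
separate transactions, so they do not see one consistent snapshot and
are only safe if the data is quiescent during the backup window:

# Dump the large bytea table and the rest of the database concurrently,
# each into its own custom-format archive.
pg_dump -h localhost -p 5432 -U postgres -F c -v \
    -f images.backup -t "public"."images" db_name &
pg_dump -h localhost -p 5432 -U postgres -F c -v \
    -f rest.backup -T "public"."images" db_name &
wait    # block until both background dumps complete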

Can anyone who has dealt with this before advise us?

Thanks!
Ryan

Re: pg_dump Performance

From: "Ryan Wells"
Date:
Sorry for the double post.  Our email server had some problems
overnight.  Feel free to ignore this.  We're still working on the issue
using suggestions from last week, and we're seeing some improvements.

Ryan

-----Original Message-----
From: Ryan Wells
Sent: Friday, April 11, 2008 9:42 PM
To: pgsql-admin@postgresql.org
Subject: pg_dump Performance

