pg_dump Performance

From: Ryan Wells
Subject: pg_dump Performance
Date:
Msg-id: EE6D03C0EF73D14E8034C37CA9B627742AD5@exchange.DOCS.COM
Responses: Re: pg_dump Performance  ("Ryan Wells" <ryan.wells@soapware.com>)
List: pgsql-admin
We're having what seem like serious performance issues with pg_dump, and
I hope someone can help.

We have several tables used to store binary data as bytea (image files
in this example), and we're seeing similar timing issues with text
tables as well.

In my most recent test, the sample table was about 5 GB across 1644
rows, with image file sizes between 1 MB and 35 MB.  The server was a
3.0 GHz P4 running WinXP with 2 GB of RAM, the backup was stored on a
separate disk from the data, and little else was running on the system.

We're doing the following:

pg_dump -i -h localhost -p 5432 -U postgres -F c -v -f
"backupTest.backup" -t "public"."images" db_name

In the test above, this took 1hr 45min to complete.  Since we expect to
have users with 50-100GB of data, if not more, backup times that take
nearly an entire day are unacceptable.

We think there must be something we're doing wrong, but none of our
searches have turned up anything (we likely just don't know the right
search terms).  Hopefully there's a server setting or pg_dump option we
need to change, but we're open to design changes if necessary.
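
One variation we plan to try, on the assumption that the compression
the custom format applies by default is what's eating the CPU on this
box, is the same dump with compression disabled via -Z 0, so we can
compare run times (the output filename below is just illustrative):

pg_dump -i -h localhost -p 5432 -U postgres -F c -Z 0 -v -f
"backupTest_nocompress.backup" -t "public"."images" db_name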

Can anyone who has dealt with this before advise us?

Thanks!
Ryan





