Backup/dump of huge tables and performance

From    brianb-pggeneral@edsamail.com
Subject Backup/dump of huge tables and performance
Date
Msg-id 20000728022636.22916.qmail@mail01.edsamail.com.ph
Replies Re: Backup/dump of huge tables and performance  (Philip Warner <pjw@rhyme.com.au>)
List    pgsql-general
Hello,

I have a large table whose datafiles are already several GB in size. It is
no longer being used in transactions. I would like to back it up and migrate
it to another server, but there are several considerations:

1. pg_dump and COPY both take up a lot of I/O resources. That's not
surprising considering the size of the table, but the impact on the overall
production system's performance is not acceptable.

2. I don't know how to estimate the size of the resulting dump files from
the database files. Even running a SELECT COUNT(*) on the table slows the
entire system down considerably, so I don't know how many rows there are
(last I checked, it was a couple million, and it's definitely in the tens
of millions). What happens when I hit Linux's 2GB filesize limit?
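One thing I might try for a rough row count without a full scan is reading
the planner's estimate from pg_class.reltuples. A minimal libpq sketch (the
table name and connection string below are just placeholders):

/* Rough row-count estimate from the system catalog, avoiding a full scan.
 * "big_table" and "mydb" are placeholder names. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=mydb");        /* placeholder DSN */
    PGresult *res  = PQexec(conn,
        "SELECT reltuples FROM pg_class WHERE relname = 'big_table'");

    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
        printf("estimated rows: %s\n", PQgetvalue(res, 0, 0));

    PQclear(res);
    PQfinish(conn);
    return 0;
}

The estimate is only as fresh as the last VACUUM/ANALYZE, but it costs
almost nothing to read.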

I would very much prefer to have the backup files in little 10-20MB chunks,
rather than one humongous dumpfile. I also want to be able to run the
backup without shutting down the service that uses the database.

As noted above, this particular table is no longer being used in
transactions. I figure I could write a C program that declares a cursor and
writes the rows out in n-MB or m-thousand-row chunks, with rest pauses in
between.
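A rough sketch of what I have in mind, using libpq. The table name,
connection string, batch size, file-rotation threshold, and pause length
are all placeholders, and the output is naive tab-separated text rather
than anything COPY-compatible:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <libpq-fe.h>

#define ROWS_PER_FETCH  10000    /* rows pulled per FETCH              */
#define ROWS_PER_FILE   200000   /* rows written before rotating files */
#define PAUSE_SECONDS   5        /* rest pause between batches         */

static void die(PGconn *conn, const char *msg)
{
    fprintf(stderr, "%s: %s\n", msg, PQerrorMessage(conn));
    PQfinish(conn);
    exit(1);
}

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=mydb");    /* placeholder DSN */
    PGresult *res;
    FILE     *out = NULL;
    char      fetch_cmd[64];
    long      rows_in_file = 0;
    int       file_no = 0;

    if (PQstatus(conn) != CONNECTION_OK)
        die(conn, "connection failed");

    /* Cursors only exist inside a transaction block. */
    PQclear(PQexec(conn, "BEGIN"));

    res = PQexec(conn, "DECLARE c CURSOR FOR SELECT * FROM big_table");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        die(conn, "DECLARE CURSOR failed");
    PQclear(res);

    snprintf(fetch_cmd, sizeof(fetch_cmd), "FETCH %d FROM c", ROWS_PER_FETCH);

    for (;;)
    {
        int ntuples, nfields, i, j;

        res = PQexec(conn, fetch_cmd);
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            die(conn, "FETCH failed");

        ntuples = PQntuples(res);
        if (ntuples == 0)            /* cursor exhausted */
        {
            PQclear(res);
            break;
        }

        nfields = PQnfields(res);
        for (i = 0; i < ntuples; i++)
        {
            /* Rotate to a new chunk file every ROWS_PER_FILE rows. */
            if (out == NULL || rows_in_file >= ROWS_PER_FILE)
            {
                char fname[64];

                if (out)
                    fclose(out);
                snprintf(fname, sizeof(fname), "big_table.%04d.dat", file_no++);
                if ((out = fopen(fname, "w")) == NULL)
                {
                    perror(fname);
                    exit(1);
                }
                rows_in_file = 0;
            }
            /* Naive tab-separated output; real code would escape the values. */
            for (j = 0; j < nfields; j++)
                fprintf(out, "%s%c", PQgetvalue(res, i, j),
                        j == nfields - 1 ? '\n' : '\t');
            rows_in_file++;
        }
        PQclear(res);

        sleep(PAUSE_SECONDS);        /* give the production load room to breathe */
    }

    if (out)
        fclose(out);
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}

The rest pause and the per-file row cap are the knobs that keep the I/O
load and the chunk sizes under control; I'd tune both against the
production load.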

Any better ideas? Would this be a good feature to incorporate into future
versions of pg_dump?

Brian
--
Brian Baquiran <brianb@edsamail.com>
http://www.baquiran.com/  AIM: bbaquiran
Work: +63(2)7182222       Home: +63(2) 9227123
