I would dare guess, and it seems you suspect as well, that the binary data is why you are not getting very good compression.
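You can see the effect quickly with gzip alone (a sketch: /dev/urandom stands in for already-compressed image data, /dev/zero for highly compressible data):

```shell
#!/bin/sh
# Repetitive data shrinks dramatically under gzip; random bytes
# (much like JPEGs or other already-compressed images) barely shrink
# at all, and can even grow slightly from the gzip framing overhead.
head -c 100000 /dev/zero    > /tmp/zeros.bin     # highly compressible
head -c 100000 /dev/urandom > /tmp/random.bin    # stand-in for compressed images
gzip -c /tmp/zeros.bin  > /tmp/zeros.bin.gz
gzip -c /tmp/random.bin > /tmp/random.bin.gz
wc -c /tmp/zeros.bin.gz /tmp/random.bin.gz
```

If your dump is dominated by data like the second file, pg_dump's compression buys you nothing, while the dump format's own overhead still gets added on top.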
You may try dumping the tables individually with

  --table=table

to see which tables are taking the most space in your dump. Once you know which tables those are, you can check what is in them and provide more details on the problem.
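Roughly like this (a sketch: the table names here are hypothetical, and it only prints the commands so you can adapt them to your schema):

```shell
#!/bin/sh
# Hypothetical table names -- substitute your own. Each command dumps
# a single table to its own file; compare the sizes with ls -l
# afterwards to find the space hogs.
for t in images documents audit_log; do
  echo "pg_dump --table=$t my_db > /backup/my_db.$t.dmp"
done
```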
Personally I don't use the built-in compression in pg_dump but pipe the output to gzip instead (I'm not sure whether it makes any difference). See
http://manual.intl.indoglobal.com/ch06s07.html for details.
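The pipe approach looks roughly like this (a sketch; the real commands are in the comments since they need a live database, and the runnable part only demonstrates the pipe pattern with dummy SQL):

```shell
#!/bin/sh
# Plain-format dump piped through gzip instead of pg_dump -Fc:
#   pg_dump my_db | gzip > /backup/my_db.sql.gz
# and to restore:
#   gunzip -c /backup/my_db.sql.gz | psql my_db
# Below, dummy SQL stands in for pg_dump's output so the round trip
# can be verified without a database:
printf 'CREATE TABLE t (id int);\n%.0s' $(seq 1000) \
  | gzip > /tmp/demo.sql.gz
gunzip -c /tmp/demo.sql.gz | wc -l
```

Note you lose pg_dump -Fc's selective-restore features this way; it is only worth it if the compression itself behaves differently for you.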
-Aaron
On 6/21/06, Nicola Mauri <Nicola.Mauri@saga.it> wrote:
[sorry if this was previously asked: list searches seem to be down]
I'm using pg_dump to take a full backup of my database using a compressed format:
$ pg_dump -Fc my_db > /backup/my_db.dmp
It produces a 6 GB file whereas the pgdata uses only 5 GB of disk space:
$ ls -l /backup
-rw-r--r-- 6592715242 my_db.dmp
$ du -b /data
5372269196 /data
How could it be?
As far as I know, dumps should be smaller than the on-disk data files, since they do not store indexes, etc.
The database contains about one hundred thousand binary images, some of which may already be compressed. So I tried the --compress=0 option, but this produces a dump that does not fit on my disk (more than 11 GB).
I'm using postgres 8.1.2 on RHEL4.
So, what can I do to diagnose the problem?
Thanks in advance,
Nicola
==================================================================
Aaron Bono
President Aranya Software Technologies, Inc.
http://www.aranya.com We take care of your technology needs.
Phone: (816) 695-6071
==================================================================