7.1 dumps with large objects

From: David Wall
Subject: 7.1 dumps with large objects
Date:
Msg-id: 000901c0c50c$93662c00$5a2b7ad8@expertrade.com
In reply to: sets and insert-select with rule  ("Gyozo Papp" <pgerzson@freestart.hu>)
Responses: Re: 7.1 dumps with large objects  (Tom Larard <larard@cs.umb.edu>)
List: pgsql-general
Wonderful job on getting 7.1 released.  I've just installed it in place of a
7.1beta4 database, with the great advantage of not even having to migrate
the database.

It seems that 7.1 is able to handle large objects in its dump/restore
natively now and no longer requires the use of the contrib program to dump
them.  Large objects are represented by OIDs in the table schema, and I'm
trying to make sure that I understand the process correctly from what I've
read in the admin guide and command reference guide.

In my case, the OID does not mean anything to my programs, and they are not
used as keys.  So I presume that I don't really care about preserving OIDs.
Does this just mean that if I restore a blob, it will get a new OID, but
otherwise everything will be okay?
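To be concrete, my schema references blobs along these lines (the table and
file names here are just made-up examples; the only thing that matters is the
oid column):

# a made-up table, just to show how a blob ends up referenced by OID
psql dbname -c "CREATE TABLE docs (id serial PRIMARY KEY, body oid)"
# the server-side lo_import() returns the new large object's OID, and that
# OID is what my application stores in docs.body
psql dbname -c "INSERT INTO docs (body) VALUES (lo_import('/tmp/report.pdf'))"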

This is my plan of attack:

To backup my database (I have several databases running in a single
postgresql server, and I'd like to be able to back them up separately since
they could move from one machine to another as the loads increase), I'll be
using:

pg_dump -b -Fc dbname > dbname.dump

Then, to restore, I'd use:

pg_restore -d dbname dbname.dump

Is that going to work for me?
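To spell out the whole round trip I have in mind (database and file names are
placeholders, and I'm assuming the target database has to exist already since
I'm not passing -C to pg_restore):

pg_dump -b -Fc dbname > dbname.dump     # custom-format dump, including blobs
createdb dbname                         # on the target server
pg_restore -d dbname dbname.dump        # restore schema, data, and blobs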

I also noted that pg_dump has a -Z level specifier for compression.  When
not specified, the backup showed a compression level of "-1" (using
pg_restore -l).  Is that the highest compression level, or does that mean it
was disabled?  I did note that the -Fc option created a file that was larger
than a plain file, and not anywhere near as small as if I gzip'ed the
output.  In my case, it's a very small test database, so I don't know if
that's the reason, or whether -Fc by itself doesn't really compress unless
the -Z option is used.
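If it would help, I can run a rough size comparison on my small test database,
something like this (file names are just examples):

pg_dump -Fc dbname > dbname.fc.dump         # custom format, no -Z given
pg_dump -Fc -Z 9 dbname > dbname.fc9.dump   # custom format with -Z 9 (assuming that's the high end)
pg_dump dbname | gzip > dbname.sql.gz       # plain dump piped through gzip
ls -l dbname.fc.dump dbname.fc9.dump dbname.sql.gz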

And for -Z, is 0 or 9 the highest compression level?  Is there a particular
value that's generally considered the best tradeoff in terms of speed versus
space?

Thanks,
David

