Discussion: backup with blobs
I'm currently using PostgreSQL 7.0.2 and have big tables with a lot of blobs. pg_dump does not cover these blobs.

When my system crashes or when I want to migrate to 7.1, will a pure file backup be enough, or will I have to write my own tools for backup and restore of these blobs and their OIDs in the corresponding tables?

thnx,
peter

--
mag. peter pilsl
phone: +43 676 3574035
fax : +43 676 3546512
email: pilsl@goldfisch.at
sms : pilsl@max.mail.at
pgp-key available
Peter Pilsl <pilsl@goldfisch.at> writes:
> I'm currently using PostgreSQL 7.0.2 and have big tables with a lot of
> blobs. pg_dump does not cover these blobs.
> When my system crashes or when I want to migrate to 7.1, will a pure
> file backup be enough, or will I have to write my own tools for backup
> and restore of these blobs and their OIDs in the corresponding tables?
No need to re-invent the wheel; use the attached.
BTW, 7.1's pg_dump is capable of dumping BLOBs. There has been talk of
hacking it up so that it could be used against a 7.0 database, which
would probably be a nicer solution than the attached code, but AFAIK
that's not done yet.
regards, tom lane
Attachments
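A minimal sketch of what that blob-capable dump and reload might look like once on 7.1, assuming its pg_dump accepts the custom archive format (-Fc) and a blobs switch (-b) and that pg_restore handles the reload; the database name my_db is only an example:

# dump schema, data and large objects into one custom-format archive
pg_dump -Fc -b my_db > my_db.dump

# recreate the database and load everything back, blobs included
createdb my_db
pg_restore -d my_db my_db.dump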
On Wed, Apr 18, 2001 at 06:07:33PM -0400, Tom Lane wrote:
> Peter Pilsl <pilsl@goldfisch.at> writes:
> > I'm currently using PostgreSQL 7.0.2 and have big tables with a lot of
> > blobs. pg_dump does not cover these blobs.
>
> > When my system crashes or when I want to migrate to 7.1, will a pure
> > file backup be enough, or will I have to write my own tools for backup
> > and restore of these blobs and their OIDs in the corresponding tables?
>
> No need to re-invent the wheel; use the attached.
>

Thanx to Tom for this great tool. Unfortunately it does not work in my test environment. When importing the blobs again, I always get the following error:

lupo:/tmp/pgdump_lo # pg_dumplo -i -d peter -s ./ -u user -p pass
66611 blobtest lo_oid peter/blobtest/lo_oid/66611
66707 blobtest lo_oid peter/blobtest/lo_oid/66707
66867 blobtest lo_oid peter/blobtest/lo_oid/66867
<skip approx.10 lines>
70579 blobtest lo_oid peter/blobtest/lo_oid/70579
70643 blobtest lo_oid peter/blobtest/lo_oid/70643
pg_dumplo: lo_import: can't create inv object for ".//peter/blobtest/lo_oid/70643"
pg_dumplo: ROLLBACK

When I comment out the 70653 blob in the index file, the error comes up at the next blob. When I try it repeatedly, the error sometimes comes up at an earlier blob, sometimes at a later one. Splitting the index file and running pg_dumplo -i on each chunk solves the problem.

Does anyone have any idea? Is there a web page about this tool, or should I contact the author?

thnx,
peter
Peter Pilsl <pilsl@goldfisch.at> writes:
> pg_dumplo: lo_import: can't create inv object for ".//peter/blobtest/lo_oid/70643"
> When I comment out the 70653 blob in the index file, the error comes
> up at the next blob. When I try it repeatedly, the error sometimes
> comes up at an earlier blob, sometimes at a later one.
Hmm. About how many blobs are you able to import before the error
happens?
I believe that 7.0.* and before have some (platform-dependent) limits on
the number of large objects touched in a single transaction. It could
be that that's what you're running up against. This problem is gone in
7.1.
regards, tom lane
On Thu, Apr 19, 2001 at 08:41:14PM -0400, Tom Lane wrote:
> Peter Pilsl <pilsl@goldfisch.at> writes:
> > pg_dumplo: lo_import: can't create inv object for ".//peter/blobtest/lo_oid/70643"
>
> > When I comment out the 70653 blob in the index file, the error comes
> > up at the next blob. When I try it repeatedly, the error sometimes
> > comes up at an earlier blob, sometimes at a later one.
>
> Hmm. About how many blobs are you able to import before the error
> happens?
>
> I believe that 7.0.* and before have some (platform-dependent) limits on
> the number of large objects touched in a single transaction. It could
> be that that's what you're running up against. This problem is gone in
> 7.1.
>

It depends ... from 5 to 15, I guess. I'll write a wrapper to split the stuff into small chunks and post it here if I get it to work ...

The problem is: I need a working blob backup before migrating to 7.1, or I'll lose them while upgrading. (There are thousands of blobs I need to process here ...)

thnx,
peter
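A rough sketch of such a wrapper, under a couple of assumptions: that the dump directory contains a plain-text index file named lo_dump.index (the file name is an assumption), that pg_dumplo -i accepts a partial index file, and that the connection options from the earlier run still apply:

#!/bin/sh
# Split the pg_dumplo index file into small pieces and import each
# piece with its own pg_dumplo run, i.e. its own transaction.
# Note: if the index file has header or comment lines, they may need
# to be stripped first (untested assumption).
DUMPDIR=/tmp/pgdump_lo      # where pg_dumplo -a put its dump
CHUNK=5                     # blobs per transaction

cd "$DUMPDIR" || exit 1
cp lo_dump.index lo_dump.index.orig
split -l "$CHUNK" lo_dump.index.orig piece.

for p in piece.*
do
    cp "$p" lo_dump.index
    pg_dumplo -i -d peter -s "$DUMPDIR" -u user -p pass || exit 1
done

# put the full index back
cp lo_dump.index.orig lo_dump.index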
Peter Pilsl <pilsl@goldfisch.at> writes:
>> Hmm. About how many blobs are you able to import before the error
>> happens?
> It depends ... from 5 to 15 I guess.
Only that many? Something's broken then. The limits I was thinking of
were on the order of thousands of blobs touched in a transaction.
There is probably additional info about the error showing up in the
postmaster log; would you look there and see what it says?
regards, tom lane
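Where that log lives depends on how the postmaster was started; if its output was redirected to a file at startup (an assumption here, as is the path), something along these lines shows the latest entries:

# path is an assumption -- use whatever file the postmaster's
# stdout/stderr were redirected to when it was started
tail -n 50 /usr/local/pgsql/data/serverlog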
On Fri, Apr 20, 2001 at 02:12:23AM +0200, Peter Pilsl wrote:
> On Wed, Apr 18, 2001 at 06:07:33PM -0400, Tom Lane wrote:
> >
> > No need to re-invent the wheel; use the attached.
> >
>
> Thanx to Tom for this great tool.
Sure :-)
> Unfortunately it does not work in my test environment.
> When importing the blobs again, I always get the following error:
>
> lupo:/tmp/pgdump_lo # pg_dumplo -i -d peter -s ./ -u user -p pass
> 66611 blobtest lo_oid peter/blobtest/lo_oid/66611
> 66707 blobtest lo_oid peter/blobtest/lo_oid/66707
> 66867 blobtest lo_oid peter/blobtest/lo_oid/66867
> <skip approx.10 lines>
> 70579 blobtest lo_oid peter/blobtest/lo_oid/70579
> 70643 blobtest lo_oid peter/blobtest/lo_oid/70643
> pg_dumplo: lo_import: can't create inv object for ".//peter/blobtest/lo_oid/70643"
>
> pg_dumplo: ROLLBACK
What do you do before this import?
pg_dumplo -i (without -r) adds *new* LOs to the DB and only updates the
LO oid in the relevant tab.attr. The option '-r' removes the old LO and
imports the new one.
Try:
pg_dumplo -a -d my_db -s /my_dump/dir
pg_dump [with relevant options for your DB] > my.dump
DROP DATABASE my_db;
CREATE DATABASE my_db;
psql my_db < my.dump
pg_dumplo -i -d my_db -s /my_dump/dir
Karel
--
Karel Zak <zakkr@zf.jcu.cz>
http://home.zf.jcu.cz/~zakkr/
C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz
On Fri, Apr 20, 2001 at 10:27:11AM +0200, Karel Zak wrote:
> >
> > Thanx to Tom for this great tool.
>
> Sure :-)
>

Sorry!! I meant to say: thanx for giving this tool to me. Even more thanx to you for writing this great tool!! :)

> pg_dumplo -i (without -r) adds *new* LOs to the DB and only updates the
> LO oid in the relevant tab.attr. The option '-r' removes the old LO and
> imports the new one.
>
> Try:
>
> pg_dumplo -a -d my_db -s /my_dump/dir
> pg_dump [with relevant options for your DB] > my.dump
> DROP DATABASE my_db;
> CREATE DATABASE my_db;
> psql my_db < my.dump
> pg_dumplo -i -d my_db -s /my_dump/dir
>

When working on a database of my own, things work perfectly. However, I have a reproducible problem on a different database when importing blobs into a single table while the old blobs still exist (no matter whether I use -r or not).

However, the problem does not happen on a newly created database, so maybe there is a problem on my system. I will install 7.1 on a different machine and check whether I can import all my data before migrating to 7.1 on a production server.

When I want to migrate _all_ data from one 7.1 to another 7.1: does a brute file copy do it?

Thanx a lot for your help,
peter
On Fri, Apr 20, 2001 at 11:37:13AM +0200, Peter Pilsl wrote:
> On Fri, Apr 20, 2001 at 10:27:11AM +0200, Karel Zak wrote:
> > Try:
> >
> > pg_dumplo -a -d my_db -s /my_dump/dir
> > pg_dump [with relevant options for your DB] > my.dump
> > DROP DATABASE my_db;
> > CREATE DATABASE my_db;
> > psql my_db < my.dump
> > pg_dumplo -i -d my_db -s /my_dump/dir
> >
>
> When working on a database of my own, things work perfectly. However,
> I have a reproducible problem on a different database when importing
> blobs into a single table while the old blobs still exist (no matter
> whether I use -r or not).
Do you have the right permissions for the DB and the dirs with the LO dumps?
> However, the problem does not happen on a newly created database, so
> maybe there is a problem on my system.
Hmm ... maybe; it sounds curious if everything is right on the new DB.
> I will install 7.1 on a different machine and check whether I can
> import all my data before migrating to 7.1 on a production server.
The LO dump format is the same for 7.0 and 7.1. If you use the 7.1
pg_dumplo for LO data dumped from 7.0, you will probably import
everything without problems.
> When I want to migrate _all_ data from one 7.1 to another 7.1: does a
> brute file copy do it?
Does "brute file copy" mean copying the backend storage files? IMHO that's
a really brutal way; the more standard approach is to dump everything out
and import it all back into the new DB.
The pg_dump in 7.1 dumps LOs too. My pg_dumplo is a crutch for old PG
versions and for export/import without special oid operations (like the
new pg_dump).
Karel
--
Karel Zak <zakkr@zf.jcu.cz>
http://home.zf.jcu.cz/~zakkr/
C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz
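To make the dump-and-reload route concrete for the 7.1-to-7.1 move, one possible shape, reusing the blob-capable pg_dump invocation sketched earlier; the host name newhost and the assumption that createdb and pg_restore take -h are for illustration only:

# on the old 7.1 server: schema, data and blobs into one archive
pg_dump -Fc -b my_db > my_db.dump

# restore straight into the new 7.1 server over the network
createdb -h newhost my_db
pg_restore -h newhost -d my_db my_db.dump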
On Fri, Apr 20, 2001 at 12:48:36PM +0200, Karel Zak wrote:
> > When working on a database of my own, things work perfectly. However,
> > I have a reproducible problem on a different database when importing
> > blobs into a single table while the old blobs still exist (no matter
> > whether I use -r or not).
>
> Do you have the right permissions for the DB and the dirs with the LO dumps?
>

Yes, I'm in real god-mode.

> > However, the problem does not happen on a newly created database, so
> > maybe there is a problem on my system.
>
> Hmm ... maybe; it sounds curious if everything is right on the new DB.
>

It is curious, and I don't want to play around any more on my test machine. Later in the weekend I will jump onto the production server, dump all the stuff, and try to restore the data in a newly installed 7.1 environment. I'll post my success/failure here then.

thnx,
peter