Re: [ADMIN] Backup of db with large objects.
From: Jeroen Elassaiss-Schaap
Subject: Re: [ADMIN] Backup of db with large objects.
Msg-id: 38B25033.A7E5C874@lumc.nl
List: pgsql-admin
> > Date: Mon, 21 Feb 2000 11:10:44 +0000 (GMT)
> > From: M.Mazurek@poznan.multinet.pl
> > Subject: Re: [ADMIN] Backup of db with large objects.
> >
> > On Mon, 21 Feb 2000, Karel Zak - Zakkr wrote:
> > > > > pg_dump doesn't support large objects, I haven't found any advice how
> > > > > to do it in the docs nor in the FAQ. How am I to back up a db with large
> > > > > objects without losing my data?
> > > > You need to write a C program using libpq's lo_import and lo_export
> > > > functions.
> > >
> > > ftp://ftp2.zf.jcu.cz/users/zakkr/pg/pg_dumplo-0.0.3.tar.gz
> > >
> > > (..still available for PG's contrib, Wanted?)
> >
> > Frankly speaking I was surprised there's no such thing at least in
> > contrib... unless there are some strong arguments for not doing it.

Maybe this is helpful as well: I've made a Perl script that accomplishes the
same thing. It stores the LOs in a tarball and restores them using their oids,
probably much like the C program does. I use it for my everyday backups, but
my databases are not that large:

-----------getlo------------
#! /usr/bin/perl
# GetLO
# Jeroen Elassaiss-Schaap 20-10-99
#
# Perl script to get all large objects referenced by a table.
#
# Uses the 'new style' interface; see man Pg.

use Pg;                 # PostgreSQL Perl extension

if (! $ARGV[1]) {
    syntax();
    die "Improper arguments\n";
}

# Some 'constants'.
$tempdir = "/tmp/postgres";
$tarfile = "$tempdir/$ARGV[0].tar";
$dbname = $ARGV[0];
$tablename_input = $ARGV[1];

# Connect to $dbname.
$conn = Pg::connectdb("dbname=$dbname");
check_result(PGRES_CONNECTION_OK, $conn->status, "Connected to $dbname");

# Get the large-object oids from the table.
$excstr = "SELECT data" .
          " FROM $tablename_input";
$result = $conn->exec($excstr);
check_result(PGRES_TUPLES_OK, $result->resultStatus,
             "Fetched records from $tablename_input.");
$ntuples = $result->ntuples;
for ($index = 0; $index < $ntuples; $index++) {
    $oid = get_record($result->getvalue($index, 0));
}
system('gzip', "$tarfile");

# Subroutine for checking the status of a postgres command/connection.
sub check_result {
    my $cmp = shift;
    my $ret = shift;
    my $msg = shift;    # import the passed variables
    my $errmsg;
    if ("$cmp" eq "$ret") {
        # compare: status is as expected, nothing to do
    } else {
        $errmsg = $conn->errorMessage;
        print "Error trying\n\t$msg\nError\n\t$errmsg\n";
        exit;
    }
}

# Subroutine for exporting one large object and appending it to the tarball.
sub get_record {
    my $oid = shift;
    # Export the large object to a temporary file named after its oid.
    $excstr = "SELECT lo_export(data, '$tempdir/$oid')" .
              " FROM $tablename_input WHERE data = $oid";
    $result2 = $conn->exec($excstr);
    check_result(PGRES_TUPLES_OK, $result2->resultStatus,
                 "Exported large object $oid from $tablename_input.");
    system("tar", "-C", "$tempdir", "-rf", "$tarfile", "$oid");
    unlink($tempdir . '/' . $oid);
    return $oid;
}

sub syntax {
    print <<'EOS';
getlo   J. Elassaiss-Schaap  20/10/99

usage: getlo <name of database> <name of table>

The table has to have its large-object oids in column 'data'.
EOS
}
--------end-of-getlo

I checked and saw that the other component, the script that restores the
large objects, is still an ugly hack only suitable for my own database. If
asked, I will clean it up and post it as well.

Cheers,

Jeroen

--
J. Elassaiss-Schaap
Dept. Physiology
Leiden University Medical Centre
Postbus 9604
2300 RC Leiden
The Netherlands
tel: 071-5276811
fax: 071-5276782
E-mail: j.schaap@physiology.medfac.leidenuniv.nl
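[Editor's note: the restore script was not posted. As a rough illustration only, a minimal sketch of the restore side might look like the following. It is NOT the author's script; it assumes the gzipped tarball produced by getlo has been unpacked into /tmp/postgres, that the table keeps its large-object oids in a column named 'data', and it uses the same old Pg module API as getlo. lo_import creates a new large object with a new oid, so the table row must be repointed at it.]

-----------putlo (hypothetical sketch)------------
#! /usr/bin/perl
# putlo -- hypothetical restore counterpart to getlo (a sketch, not the
# author's actual script).
use Pg;

my $tempdir = "/tmp/postgres";          # assumed: tarball already unpacked here
my $dbname  = $ARGV[0];
my $table   = $ARGV[1];
die "usage: putlo <name of database> <name of table>\n" unless $table;

my $conn = Pg::connectdb("dbname=$dbname");
die $conn->errorMessage unless $conn->status == PGRES_CONNECTION_OK;

# Each unpacked file is named after the old oid stored in the table.
opendir(my $dh, $tempdir) or die "cannot open $tempdir: $!\n";
for my $oldoid (grep { /^\d+$/ } readdir($dh)) {
    # lo_import creates a new large object and returns its new oid;
    # point the table row at it.
    my $res = $conn->exec(
        "UPDATE $table SET data = lo_import('$tempdir/$oldoid')" .
        " WHERE data = $oldoid");
    die $conn->errorMessage
        unless $res->resultStatus == PGRES_COMMAND_OK;
}
closedir($dh);
--------end-of-putlo-sketch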