Discussion: pg_dump's over 2GB

pg_dump's over 2GB

From
"Bryan White"
Date:
My current backups made with pg_dump are currently 1.3GB.  I am wondering
what kind of headaches I will have to deal with once they exceed 2GB.

What will happen with pg_dump on a Linux 2.2.14 i386 kernel when the output
exceeds 2GB?
Currently the dump file is later fed to a 'tar cvfz'.  I am thinking that
instead I will need to pipe pg_dump's output into gzip, thus avoiding the
creation of a file of that size.
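That is, something along these lines (database name and output path here are
just examples):

    pg_dump mydb | gzip > /backup/mydb.dump.gz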

Does anyone have experience with this sort of thing?

Bryan White, ArcaMax.com, VP of Technology
You can't deny that it is not impossible, can you.


Re: pg_dump's over 2GB

From
Adam Haberlach
Date:
On Fri, Sep 29, 2000 at 12:15:26PM -0400, Bryan White wrote:
> My current backups made with pg_dump are currently 1.3GB.  I am wondering
> what kind of headaches I will have to deal with once they exceed 2GB.
>
> What will happen with pg_dump on a Linux 2.2.14 i386 kernel when the output
> exceeds 2GB?
> Currently the dump file is later fed to a 'tar cvfz'.  I am thinking that
> instead I will need to pipe pg_dump's output into gzip, thus avoiding the
> creation of a file of that size.
>
> Does anyone have experience with this sort of thing?

    We have had some problems with tar silently truncating some >2GB files
during a backup.  We also had to move the Perforce server from Linux to
BSD because some checkpoint files were being truncated at 2GB (not a Perforce
problem, but a Linux one).

    Be careful, test frequently, etc...
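
    For instance, a quick post-backup sanity check might look something like
this (filename is just an example; it only verifies the gzip stream, not a
successful restore):

    gzip -t pgdump.gz && ls -l pgdump.gz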

--
Adam Haberlach            | A billion hours ago, human life appeared on
adam@newsnipple.com       | earth.  A billion minutes ago, Christianity
http://www.newsnipple.com | emerged.  A billion Coca-Colas ago was
'88 EX500                 | yesterday morning. -1996 Coca-Cola Ann. Rpt.

Re: pg_dump's over 2GB

From
"Steve Wolfe"
Date:
> My current backups made with pg_dump are currently 1.3GB.  I am wondering
> what kind of headaches I will have to deal with once they exceed 2GB.
>
> What will happen with pg_dump on a Linux 2.2.14 i386 kernel when the output
> exceeds 2GB?

  There are some ways around it if your program supports it (large-file
support); I'm not sure if it works with redirects...

> Currently the dump file is later fed to a 'tar cvfz'.  I am thinking that
> instead I will need to pipe pg_dump's output into gzip, thus avoiding the
> creation of a file of that size.

   Why not just pump the data right into gzip?  Something like:

pg_dumpall | gzip --stdout > pgdump.gz

  (I'm sure that the more efficient shell scripters will know a better way)

  If your data is anything like ours, you will get at least a 5:1
compression ratio, meaning you can actually dump around 10 gigs of data
before you hit the 2 gig file limit.
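
  If you want to keep an eye on how close the compressed dump is getting to
the limit, 'gzip -l' reports the compressed and uncompressed sizes and the
ratio (its uncompressed-size field wraps around for very large inputs, so
treat it as a rough guide):

gzip -l pgdump.gz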

steve


Re: pg_dump's over 2GB

From
Jeff Hoffmann
Date:
Bryan White wrote:
>
> I am thinking that
> instead I will need to pipe pg_dump's output into gzip, thus avoiding the
> creation of a file of that size.
>

sure, i do it all the time.  unfortunately, i've had it happen a few
times where even gzipping a database dump goes over 2GB, which is a real
PITA since i have to dump some tables individually.  generally, i do
something like
    pg_dump database | gzip > database.pgz
to dump the database and
    gzip -dc database.pgz | psql database
to restore it.  i've always thought that compress should be an option
for pg_dump, but it's really not that much more work to just pipe the
input and output through gzip.
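
for the tables that have to be dumped individually, it's the same idea with
pg_dump's -t switch (table and database names here are just examples):

    pg_dump -t bigtable database | gzip > bigtable.pgz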

--

Jeff Hoffmann
PropertyKey.com

Re: pg_dump's over 2GB

From
"Ross J. Reedstrom"
Date:
On Fri, Sep 29, 2000 at 11:41:51AM -0500, Jeff Hoffmann wrote:
> Bryan White wrote:
> >
> > I am thinking that
> > instead I will need to pipe pg_dump's output into gzip, thus avoiding the
> > creation of a file of that size.
>
> sure, i do it all the time.  unfortunately, i've had it happen a few
> times where even gzipping a database dump goes over 2GB, which is a real
> PITA since i have to dump some tables individually.  generally, i do


> something like
>     pg_dump database | gzip > database.pgz

Hmm, how about:

pg_dump database | gzip | split -b 1024m - database_

Which will give you 1GB files, named database_aa, database_ab, etc.

> to dump the database and
>     gzip -dc database.pgz | psql database

cat database_* | gunzip | psql database
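
And to check that the pieces reassemble into a valid gzip stream before you
actually need them (a quick check, not a replacement for a real test restore):

cat database_* | gzip -t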

Ross Reedstrom
--
Open source code is like a natural resource, it's the result of providing
food and sunshine to programmers, and then staying out of their way.
[...] [It] is not going away because it has utility for both the developers
and users independent of economic motivations.  Jim Flynn, Sunnyvale, Calif.