Discussion: pg_dump max file size exceeded


pg_dump max file size exceeded

From:
"Fred Moyer"
Date:
hey fellow pg'ers.

ran time pg_dump -c --verbose database > datafile.psql from the command line
and got a file size limit exceeded.  datafile.psql stopped at 2 gigs.  any
ideas how to exceed that limit?

redhat 7.2, 2.4.9-31 kernel
postgres 7.2


Re: pg_dump max file size exceeded

From:
"Nick Fankhauser"
Date:
Pipe it into gzip:

pg_dump db_name | gzip > dbname.sql.gz

NickF

--------------------------------------------------------------------------
Nick Fankhauser  nickf@ontko.com  Phone 1.765.935.4283  Fax 1.765.962.9788
Ray Ontko & Co.     Software Consulting Services     http://www.ontko.com/


> -----Original Message-----
> From: pgsql-admin-owner@postgresql.org
> [mailto:pgsql-admin-owner@postgresql.org]On Behalf Of Fred Moyer
> Sent: Tuesday, March 19, 2002 4:39 PM
> To: pgsql-admin@postgresql.org
> Subject: [ADMIN] pg_dump max file size exceeded
>
>
> hey fellow pg'ers.
>
> ran time pg_dump -c --verbose database > datafile.psql from the
> command line
> and got a file size limit exceeded.  datafile.psql stopped at 2 gigs.  any
> ideas how to exceed that limit?
>
> redhat 7.2, 2.4.9-31 kernel
> postgres 7.2
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 2: you can get off all lists at once with the unregister command
>     (send "unregister YourEmailAddressHere" to majordomo@postgresql.org)
>


Re: pg_dump max file size exceeded

From:
Tuna Chatterjee
Date:
hi fred,

i ran into the same problem a couple of weeks ago and realized that the
problem was where the dump was being put instead of trying to deal with
the size of the file.

go ahead and type 'df' at the command prompt and look at the space
allocated to your partitions.  do the database dump accordingly. i.e. i
found out that /var/tmp was the perfect place to spit out my dbdump.txt
file.
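A hedged sketch of that workflow (the database name and the /var/tmp target are placeholders, not taken from the thread):

```shell
# Check free space per mounted filesystem, then dump to the roomiest one.
df -h
# e.g. if /var/tmp has the space (substitute your own database and path):
pg_dump -c --verbose database > /var/tmp/dbdump.txt
```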

good luck!
tuna

On Tue, 2002-03-19 at 16:39, Fred Moyer wrote:
> hey fellow pg'ers.
>
> ran time pg_dump -c --verbose database > datafile.psql from the command line
> and got a file size limit exceeded.  datafile.psql stopped at 2 gigs.  any
> ideas how to exceed that limit?
>
> redhat 7.2, 2.4.9-31 kernel
> postgres 7.2
>
>



Re: pg_dump max file size exceeded

From:
Tom Lane
Date:
"Fred Moyer" <fred@digicamp.com> writes:
> ran time pg_dump -c --verbose database > datafile.psql from the command line
> and got a file size limit exceeded.  datafile.psql stopped at 2 gigs.  any
> ideas how to exceed that limit?

> redhat 7.2, 2.4.9-31 kernel

[ scratches head... ]  If you were on Solaris or HPUX I'd tell you to
recompile with 64-bit file offset support enabled.  But I kinda thought
this was standard equipment on recent Linux versions.  Anyone know the
magic incantation for large-file support on Linux?

            regards, tom lane

Re: pg_dump max file size exceeded

From:
Jyry Kuukkanen
Date:
On Tue, 2002-03-19 at 16:39, Fred Moyer wrote:
> hey fellow pg'ers.
>
> ran time pg_dump -c --verbose database > datafile.psql from the command line
> and got a file size limit exceeded.  datafile.psql stopped at 2 gigs.  any
> ideas how to exceed that limit?
>
> redhat 7.2, 2.4.9-31 kernel
> postgres 7.2


Hello Fred

Are you trying to dump over NFS? NFS has a 2G file size limit and AFAIK
there is no simple way to get around it.

However, you could compress your dump on the fly:
pg_dump -c --verbose database |gzip -c >datafile.psql.gz

or

pg_dump -c --verbose database | bzip2 -c >datafile.psql.bz2
(for better compression)

restoring:
zcat datafile.psql.gz | psql database

or

bzcat datafile.psql.bz2 | psql database


Cheers,
--Jyry
C:-(    C:-/    C========8-O    C8-/    C:-(

[Finnish signature tagline, roughly: "In the result listings of
international competitions, a de-umlauting agent is applied to the
competitors' names."]



Re: pg_dump max file size exceeded

From:
Naomi Walker
Date:
At 12:15 AM 3/20/02 -0500, Tom Lane wrote:
>"Fred Moyer" <fred@digicamp.com> writes:
> > ran time pg_dump -c --verbose database > datafile.psql from the command
> line
> > and got a file size limit exceeded.  datafile.psql stopped at 2 gigs.  any
> > ideas how to exceed that limit?
>
> > redhat 7.2, 2.4.9-31 kernel
>
>[ scratches head... ]  If you were on Solaris or HPUX I'd tell you to
>recompile with 64-bit file offset support enabled.  But I kinda thought
>this was standard equipment on recent Linux versions.  Anyone know the
>magic incantation for large-file support on Linux?

depending on the shell being used, i'd try limit or ulimit

We've seen a case where large file support had to be tweaked in the Veritas
file systems as well.
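A hedged sketch of checking that (the syntax below is sh/bash; csh-family shells spell it differently):

```shell
# Show the current per-process file size limit (512-byte blocks, or "unlimited").
ulimit -f
# Lift it for this shell session before running pg_dump:
ulimit -f unlimited
# csh/tcsh equivalent:  limit filesize unlimited
```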

--
Naomi Walker
Chief Information Officer
Eldorado Computing, Inc.
602-604-3100  ext 242


Re: pg_dump max file size exceeded

From:
"Corey W. Gibbs"
Date:
Hidey hidey hidey hi,

Have you tried piping the output to gzip then to another file? So
pg_dump -c --verbose database | gzip > /foo/bar.gzip?

I also use ftpbackup to move the gzip file to another server that has a ton
o diskspace and large file support.  Here's a line from the script

pg_dump CES | gzip | /usr/local/bin/ftpbackup -h bessie -u foo -p bar -b
/lboxbak/$MONTH$DAY.CES.gz

$MONTH and $DAY are set earlier in the script.
hope this helps
~corey

-----Original Message-----
From:    Naomi Walker [SMTP:nwalker@eldocomp.com]
Sent:    Wednesday, March 20, 2002 7:46 AM
To:    Tom Lane; Fred Moyer
Cc:    pgsql-admin@postgresql.org
Subject:    Re: [ADMIN] pg_dump max file size exceeded

At 12:15 AM 3/20/02 -0500, Tom Lane wrote:
>"Fred Moyer" <fred@digicamp.com> writes:
> > ran time pg_dump -c --verbose database > datafile.psql from the command line
> > and got a file size limit exceeded.  datafile.psql stopped at 2 gigs.  any
> > ideas how to exceed that limit?
>
> > redhat 7.2, 2.4.9-31 kernel
>
>[ scratches head... ]  If you were on Solaris or HPUX I'd tell you to
>recompile with 64-bit file offset support enabled.  But I kinda thought
>this was standard equipment on recent Linux versions.  Anyone know the
>magic incantation for large-file support on Linux?

depending on the shell being used, i'd try limit or ulimit

We've seen a case where large file support had to be tweaked in the Veritas
file systems as well.

--
Naomi Walker
Chief Information Officer
Eldorado Computing, Inc.
602-604-3100  ext 242




Re: pg_dump max file size exceeded

From:
Rolf Luettecke
Date:
>"Fred Moyer" <fred@digicamp.com> writes:
> ran time pg_dump -c --verbose database > datafile.psql from the command
> line and got a file size limit exceeded.  datafile.psql stopped at 2 gigs.
> any ideas how to exceed that limit?

Workaround: pipe the output to gzip/bzip2 if the compressed file stays
under the 2 GB limit, or cut the output into 2 GB pieces.

Regards
R. Luettecke

Re: pg_dump max file size exceeded

From:
Dmitry Morozovsky
Date:
On Wed, 20 Mar 2002, Rolf Luettecke wrote:

RL> >"Fred Moyer" <fred@digicamp.com> writes:
RL> > ran time pg_dump -c --verbose database > datafile.psql from the command
RL> > line and got a file size limit exceeded.  datafile.psql stopped at 2 gigs.
RL> > any ideas how to exceed that limit?
RL>
RL> Workaround: Pipe output to gzip/bzip2, if compressed file size does not
RL> reach 2 GB limit, or cut output into 2GB-pieces.

... and if it still does not fit, pipe it further into split(1) ;-)
[though I don't know whether this utility exists in standard Linux
distrib. In BSDs it does.]
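(split is in GNU coreutils, so standard Linux distributions do have it.) A hedged sketch of that pipeline; the 1900m chunk size is just an example kept under the 2 GB cap:

```shell
# Compress and split the dump into pieces that stay under the 2 GB limit.
pg_dump -c --verbose database | gzip -c | split -b 1900m - datafile.psql.gz.
# Restore by concatenating the pieces back together:
cat datafile.psql.gz.* | gunzip -c | psql database
```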


Sincerely,
D.Marck                                   [DM5020, DM268-RIPE, DM3-RIPN]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
------------------------------------------------------------------------