Thread: large file limitation


large file limitation

From:
jeff.brickley@motorola.com (Jeff)
Date:
I have installed Postgres 7.1.3 on a Solaris 2.8 machine.  When I dump
the database the file is larger than the 2 GB limit.  I checked with
our unix admin and he confirmed that Solaris 2.8 would not support
files larger than 2GB until he made a modification to use large files.
He made the modification and we verified that the unix system could
handle files larger than 2GB.  I then dumped the database again and
noticed the same situation.  The dump files truncate at the 2GB limit.
I suppose I need to recompile Postgres now that the system accepts
large files.  Is there any library that I need to point to manually or
some option that I need to pass in the configuration?  How do I ensure
Postgres can handle large files (>2GB)?

Thanks

Re: large file limitation

From:
Andrew Sullivan
Date:
On Thu, Jan 10, 2002 at 01:10:35PM -0800, Jeff wrote:
> handle files larger than 2GB.  I then dumped the database again and
> noticed the same situation.  The dump files truncate at the 2GB limit.

We just had the same thing happen recently.

>  I suppose I need to recompile Postgres now that the system
> accepts large files.

Yes.

> Is there any library that I need to point to manually or some
> option that I need to pass in the configuration?  How do I ensure
> Postgres can handle large files (>2GB)?

Yes.  It turns out that gcc (and maybe other C compilers; I don't
know) doesn't turn on 64-bit file offsets by default.  You need to add
a CFLAGS setting.  The necessary flags can be found with

    CFLAGS="`getconf LFS_CFLAGS`"

(I stole that from the Python guys:
<http://www.python.org/doc/current/lib/posix-large-files.html>).

Note that this will _not_ compile the binary as a 64-bit binary, so
checking it with "file" will still report a 32-bit binary.
Everything I've read about the subject suggests that gcc-compiled
64-bit binaries on Solaris are somewhat flaky, so I've not tried it.
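
Roughly, the rebuild then goes something like this (a sketch only;
substitute your own source directory and configure options, and note
that CFLAGS has to be set before configure runs):

    cd postgresql-7.1.3
    CFLAGS="`getconf LFS_CFLAGS`"
    export CFLAGS
    ./configure        # plus whatever options you normally use
    make
    make install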

Hope this is helpful.

A

--
----
Andrew Sullivan                               87 Mowat Avenue
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M6K 3E3
                                         +1 416 646 3304 x110


Re: large file limitation

From:
Bill Cunningham
Date:
Check the user limit (ulimit) for the user you're running postgres under.
It's probably causing the problem.

- Bill


Re: large file limitation

From:
Brian Hirt
Date:
Jeff,

Since your problems have to do with backups, a better solution than
reconfiguring your kernel and recompiling postgres might be to split up
the backups into smaller files.

pg_dump mydb | split -b 1000m - backupfile

man backup
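
To restore, concatenate the pieces back into psql (this assumes a
plain-text dump and the file names from the example above):

cat backupfile* | psql mydb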

--brian

On Fri, 2002-01-18 at 13:03, Bill Cunningham wrote:
> Check the user limit (ulimit) for the user you're running postgres under.
> It's probably causing the problem.
>
> - Bill

Re: large file limitation

From:
Andrew Sullivan
Date:
On Fri, Jan 18, 2002 at 02:39:35PM -0500, Andrew Sullivan wrote:

> CFLAGS setting.  The necessaries can be found with
>
>     CFLAGS="`getconf LFS_CFLAGS`"

Tom Lane pointed out to me that I wasn't clear enough: you need to
have that exported _before_ you run ./configure, or it won't help.

A

--
----
Andrew Sullivan                               87 Mowat Avenue
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M6K 3E3
                                         +1 416 646 3304 x110


Re: large file limitation

From:
Jan Wieck
Date:
Andrew Sullivan wrote:
> On Thu, Jan 10, 2002 at 01:10:35PM -0800, Jeff wrote:
> > handle files larger than 2GB.  I then dumped the database again and
> > noticed the same situation.  The dump files truncate at the 2GB limit.
>
> We just had the same happen recently.
>
> >  I suppose I need to recompile Postgres now that the system
> > accepts large files.
>
> Yes.

    No.  PostgreSQL is totally fine with that limit, it will just
    segment huge tables into separate files of 1G max each.
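
    For instance, a table a bit over 2 GB shows up in the data
    directory as three segment files, something like this (the
    relfilenode, sizes, and dates below are invented purely for
    illustration):

        $ ls -l $PGDATA/base/18721/126493*
        -rw-------   1 postgres postgres 1073741824 Jan 18 12:01 126493
        -rw-------   1 postgres postgres 1073741824 Jan 18 12:02 126493.1
        -rw-------   1 postgres postgres  214958080 Jan 18 12:03 126493.2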


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #





Re: large file limitation

From:
Tom Lane
Date:
Jan Wieck <janwieck@yahoo.com> writes:
>>> I suppose I need to recompile Postgres now that the system
>>> accepts large files.
>>
>> Yes.

>     No.  PostgreSQL is totally fine with that limit, it will just
>     segment huge tables into separate files of 1G max each.

The backend is fine with it, but "pg_dump >outfile" will choke when
it gets past 2Gb of output (at least, that is true on Solaris).

I imagine "pg_dump | split" would do as a workaround, but don't have
a Solaris box handy to verify.

I can envision building 32-bit-compatible stdio packages that don't
choke on large files unless you actually try to do ftell or fseek beyond
the 2G boundary.  Solaris' implementation, however, evidently fails
hard at the boundary.

            regards, tom lane

Re: large file limitation

From:
Jan Wieck
Date:
Tom Lane wrote:
> Jan Wieck <janwieck@yahoo.com> writes:
> >>> I suppose I need to recompile Postgres now that the system
> >>> accepts large files.
> >>
> >> Yes.
>
> >     No.  PostgreSQL is totally fine with that limit, it will just
> >     segment huge tables into separate files of 1G max each.
>
> The backend is fine with it, but "pg_dump >outfile" will choke when
> it gets past 2Gb of output (at least, that is true on Solaris).
>
> I imagine "pg_dump | split" would do as a workaround, but don't have
> a Solaris box handy to verify.
>
> I can envision building 32-bit-compatible stdio packages that don't
> choke on large files unless you actually try to do ftell or fseek beyond
> the 2G boundary.  Solaris' implementation, however, evidently fails
> hard at the boundary.

    Meaning what?  That even if he'd recompile PostgreSQL to
    support large files, the "pg_dump >outfile" would still choke
    ... duh!


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #





Re: large file limitation

From:
Tom Lane
Date:
Jan Wieck <janwieck@yahoo.com> writes:
>     Meaning what?  That even if he'd recompile PostgreSQL to
>     support large files, the "pg_dump >outfile" would still choke
>     ... duh!

I'm simply reporting what I've heard from dbadmins who are actually
running large installations on Solaris: build pg_dump one way, it
can output more than 2Gb, build it the other way and it can't.
Arguing with experimental facts is generally futile...

            regards, tom lane

Re: large file limitation

From:
Adrian Phillips
Date:
I've had the same problem with another type of backup (file system)
against an AIX machine and thought the following would work:

dump <options> | cat > filename

assuming cat could write bigger files.  Unfortunately, in the little
time I spent trying to get it to work I was unable to do so, but I
would have thought it would work in theory.

Sincerely,

Adrian Phillips

--
Your mouse has moved.
Windows NT must be restarted for the change to take effect.
Reboot now?  [OK]

Re: large file limitation

From:
Justin Clift
Date:
Hi Jeff,

Large file support was recently added to the PostgreSQL Installation
Guide for Solaris:

http://techdocs.postgresql.org/installguides.php#solaris

If you run the command:

/bin/getconf LFS_CFLAGS

it will tell you which compiler flags you need to add.  Pretty much
that's what's been added to the guide.
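
On Solaris 8, for example, that typically prints something like (your
output may differ):

-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64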

:)

Regards and best wishes,

Justin Clift



--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
   - Indira Gandhi

Re: large file limitation

From:
Andrew Sullivan
Date:
On Fri, Jan 18, 2002 at 08:51:47PM -0500, Tom Lane wrote:
>
> The backend is fine with it, but "pg_dump >outfile" will choke when
> it gets past 2Gb of output (at least, that is true on Solaris).

Right.  Sorry if I wasn't clear about that; I know that Postgres
itself never writes a file bigger than 1 Gig, but pg_dump and
pg_restore can easily pass that limit.

> I imagine "pg_dump | split" would do as a workaround, but don't have
> a Solaris box handy to verify.

It will.  If you check 'man largefiles' on Solaris (7 anyway; I don't
know about other versions) it will tell you what basic Solaris system
binaries are large file aware.  /usr/bin/split is one of them, as is
/usr/bin/compress.  We are working in a hosted environment, and I
didn't completely trust the hosts not to drop one of the files when
sending them to tape, or I would have used split instead of
recompiling.
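
For example, something along these lines would keep every piece well
under the limit (the names are illustrative, a plain-text dump is
assumed, and zcat is assumed to read from a pipe on your system):

    # dump: compress the stream, then chop it into 1 GB pieces
    pg_dump mydb | /usr/bin/compress | /usr/bin/split -b 1024m - mydb.dump.Z.

    # restore: reassemble the pieces and decompress into psql
    cat mydb.dump.Z.* | zcat | psql mydb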

A

--
----
Andrew Sullivan                               87 Mowat Avenue
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M6K 3E3
                                         +1 416 646 3304 x110


Re: large file limitation

From:
Andrew Sullivan
Date:
On Fri, Jan 18, 2002 at 08:56:06PM -0500, Jan Wieck wrote:

>     Meaning what?  That even if he'd recompile PostgreSQL to
>     support large files, the "pg_dump >outfile" would still choke
>     ... duh!

No.  If you recompile it with the CFLAGS setting I sent, it will work
fine.

A

--
----
Andrew Sullivan                               87 Mowat Avenue
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M6K 3E3
                                         +1 416 646 3304 x110