Discussion: psql error: file too large!


psql error: file too large!

From:
Roy Souther
Date:
This is a follow-up to my previous post, "Upgrade problems".
I compiled from source and have the same problem.

---------------------- My original post. ----------------------------------
I have a large database. I had PG 7.0. I did the data dump and got a 2.5 GB
file. But when I try to load it with psql under the new 7.1.2, I get a "file
too large" error and nothing happens; psql exits.

Is there a limit in psql? I am hoping it was compiled wrong. I installed from
the binary RPMs for Mandrake 8.0.

I will try compiling from source and see if that fixes the problem.
Is there some way I could dump to two files?
Or do a dump of individual tables? pg_dumpall just does an SQL "COPY
<table_name> TO stdout", so is there an easy way I could write a bash script
to do a dump like this for each table? Then I could restore them one table
at a time, right?
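A minimal sketch of such a script, assuming a database named mydb (a
hypothetical name) and that psql and pg_dump are in the PATH; the catalog
query and file naming are illustrative:

    #!/bin/sh
    # Dump every user table in "mydb" to its own plain-text file.
    DB=mydb
    # List user tables, skipping system catalogs (which start with pg_).
    # Assumes table names contain no spaces.
    TABLES=`psql -t -A -c "SELECT tablename FROM pg_tables WHERE tablename NOT LIKE 'pg_%'" $DB`
    for t in $TABLES; do
        echo "dumping $t"
        pg_dump -t "$t" "$DB" > "$t.sql"
    done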

I have no large objects in the tables. Everything should dump to text easily.
How would I get it back?
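Restoring would be a loop in the other direction, under the same assumptions
(one plain-text dump file per table, loaded one at a time):

    for f in *.sql; do
        psql mydb < "$f"
    done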
--
Roy Souther <roy@silicontao.com>



Re: psql error: file too large!

From:
Weiping He
Date:
Roy Souther wrote:

> This is a follow-up to my previous post, "Upgrade problems".
> I compiled from source and have the same problem.
>
> ---------------------- My original post. ----------------------------------
> I have a large database. I had PG 7.0. I did the data dump and got a 2.5 GB
> file. But when I try to load it with psql under the new 7.1.2, I get a "file
> too large" error and nothing happens; psql exits.
>
> Is there a limit in psql? I am hoping it was compiled wrong. I installed from
> the binary RPMs for Mandrake 8.0.
>
> I will try compiling from source and see if that fixes the problem.
> Is there some way I could dump to two files?
> Or do a dump of individual tables? pg_dumpall just does an SQL "COPY
> <table_name> TO stdout", so is there an easy way I could write a bash script
> to do a dump like this for each table? Then I could restore them one table
> at a time, right?
>

A single file on ext2 can't exceed 2 GB,
so I think you should use 'gzip' to compress the dump,
or use 'split' to break up the large file, or use 'pg_dump -t table_name'
to dump only one table at a time.
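For example (a sketch, assuming a database named mydb; these pipelines follow
the recipes in the page linked below):

    # Compressed dump, and the matching restore:
    pg_dump mydb | gzip > mydb.dump.gz
    gunzip -c mydb.dump.gz | psql mydb

    # Or split the dump into chunks safely under 2 GB, then reassemble:
    pg_dump mydb | split -b 1000m - mydb.dump.
    cat mydb.dump.* | psql mydb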

see:

http://www.postgresql.org/idocs/index.php?backup.html#BACKUP-DUMP-LARGE

It should help.

    regards    laser



Re: psql error: file too large!

From:
Roy Souther
Date:
ext2 does support files larger than 2 GB. As I said in the email, I created the
2.5 GB file on this same system. Intel-based Linux systems had a bug in older
versions of Linux that imposed a 2 GB file-size limit, but that problem was
fixed and my system does not have the bug.
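One quick sanity check that the kernel and filesystem accept files over 2 GB
(a sketch; the path is arbitrary, and seeking creates a sparse file rather
than writing 2 GB of real data):

    # Seek 2200 MB into a new file, then write one 1 MB block.
    dd if=/dev/zero of=/tmp/bigtest bs=1024k seek=2200 count=1
    ls -l /tmp/bigtest    # should report a size around 2.3 GB
    rm /tmp/bigtest

Note that on 32-bit Linux each individual program must also be compiled with
large-file support (e.g. _FILE_OFFSET_BITS=64); a binary built without it
cannot open a file over 2 GB even when other tools on the same system can
create one, which could explain psql failing on a dump that pg_dump wrote.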

I will check out the link.
--
Roy Souther <roy@silicontao.com>
