pg_dump 2 gig file size limit on ext3

From: Jeremiah Jahn
Subject: pg_dump 2 gig file size limit on ext3
Date:
Msg-id: 1039154784.7905.138.camel@bluejay.goodinassociates.com
Responses: Re: pg_dump 2 gig file size limit on ext3  (Tommi Maekitalo <t.maekitalo@epgmbh.de>)
List: pgsql-general
I have the strangest thing happening: I can't finish a pg_dump of my db.
It says that I have reached the maximum file size at 2 GB. I'm running
this on a system with Red Hat 8.0, because the problem existed on 7.3 as
well, on an ext3 RAID array. The size of the db is roughly 4 GB. I'm using
7.2.2; I tried 7.2.1 earlier today and got the same problem. I don't
think I can really split the data across different tables, since I use
large objects. Anyone out there have any ideas why this is happening?
I took the 2 GB dump and concatenated it onto itself just to see what
would happen, and the resulting 4.2 GB file was fine, so this really seems
to be a problem with pg_dump rather than the filesystem. pg_dump with -Ft just crashes
with some sort of "failed to write: tried to write 221 of 256" error, or
something like that; the resulting file is about 1.2 GB. -Fc
stops at the 2 GB limit. Do I need to recompile this with some 64-bit
setting or something? I'm currently using the default Red Hat build.
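
(For reference, a commonly documented workaround when a dump hits a 2 GB
file limit is to pipe pg_dump's plain-text output through split or a
compressor, so pg_dump never writes one oversized file itself. The commands
below are only a sketch of that approach, assuming a plain-format dump of a
hypothetical database named mydb; they are not from this thread, and a
plain-format dump has its own limitations around large objects.)

    # dump in 1 GB chunks so no single file reaches the 2 GB mark
    pg_dump mydb | split -b 1000m - mydb.dump.

    # restore by concatenating the chunks back into psql
    cat mydb.dump.* | psql newdb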

thanx for any ideas,
-jj-
--
I hope you're not pretending to be evil while secretly being good.
That would be dishonest.

