Re: pg_dump and large files - is this a problem?

From: Giles Lean
Subject: Re: pg_dump and large files - is this a problem?
Date:
Msg-id: 13309.1033679729@nemeton.com.au
In response to: Re: pg_dump and large files - is this a problem?  (Philip Warner <pjw@rhyme.com.au>)
Responses: Re: pg_dump and large files - is this a problem?  (Philip Warner <pjw@rhyme.com.au>)
List: pgsql-hackers
Philip Warner writes:

> My limited reading of off_t stuff now suggests that it would be brave to 
> assume it is even a simple 64 bit number (or even 3 32 bit numbers).

What are you reading??  If you find a platform with 64 bit file
offsets that doesn't support 64 bit integral types, I will not just be
surprised but amazed.
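
For what it's worth, on the platforms I'm familiar with you can ask
for a 64 bit off_t with the usual large-file macro and verify it at
compile time.  An untested sketch (the macro is honoured by glibc,
Solaris, HP-UX and friends, but check your own platform):

/* Untested sketch: request a 64 bit off_t via the common LFS macro
 * and break the build if we don't get one.  The macro must appear
 * before any system header is included. */
#define _FILE_OFFSET_BITS 64

#include <sys/types.h>
#include <stdio.h>

/* poor man's compile-time assertion: negative array size if false */
typedef char off_t_is_64_bits[sizeof(off_t) >= 8 ? 1 : -1];

int
main(void)
{
    printf("sizeof(off_t) = %d\n", (int) sizeof(off_t));
    return 0;
}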

> One alternative, which I am not terribly fond of, is to have pg_dump
> write multiple files - when we get to 1 or 2GB, we just open another
> file, and record our file positions as a (file number, file
> position) pair. Low tech, but at least we know it would work.

That does avoid the issue completely, of course, and also avoids
problems where a platform might have large file support but a
particular filesystem might or might not.
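
If you did go that way it needn't be much code.  A rough sketch, with
made-up names that have nothing to do with the existing pg_dump
source: a writer that rolls over to a new file before 1GB and hands
back (file number, file position) pairs that fit in plain longs.

#include <stdio.h>

#define SEGMENT_LIMIT (1024L * 1024L * 1024L)   /* stay under 1GB per file */

typedef struct
{
    const char *basename;
    int         segno;          /* current file number   */
    long        offset;         /* offset within segment */
    FILE       *fp;
} SegFile;

typedef struct
{
    int         segno;          /* the (file number, file position) pair */
    long        offset;
} SegPos;

/* close the current segment (if any) and open the next one */
static int
seg_next(SegFile *sf)
{
    char        path[1024];

    if (sf->fp)
        fclose(sf->fp);
    snprintf(path, sizeof(path), "%s.%04d", sf->basename, sf->segno);
    sf->fp = fopen(path, "wb");
    sf->offset = 0;
    return sf->fp ? 0 : -1;
}

/* write a block, rolling over to a new segment when the current one
 * is full, and return where the block landed */
static SegPos
seg_write(SegFile *sf, const void *buf, size_t len)
{
    SegPos      pos;

    if (sf->fp == NULL || sf->offset + (long) len > SEGMENT_LIMIT)
    {
        if (sf->fp)
            sf->segno++;
        seg_next(sf);
    }
    pos.segno = sf->segno;
    pos.offset = sf->offset;
    fwrite(buf, 1, len, sf->fp);
    sf->offset += (long) len;
    return pos;
}

int
main(void)
{
    SegFile     sf = {"dumpfile", 0, 0, NULL};
    SegPos      where = seg_write(&sf, "hello\n", 6);

    printf("wrote at segment %d, offset %ld\n", where.segno, where.offset);
    fclose(sf.fp);
    return 0;
}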

> Unless anyone knows of a documented way to get 64 bit uint/int file 
> offsets, I don't see we have much choice.

If you're on a platform that supports large files, it will either have
a straightforward 64 bit off_t or else will support the "large files
API" that is common on Unix-like operating systems.

What are you trying to do, exactly?

Regards,

Giles




