Re: [GENERAL] 2 gig file size limit
| From | Neil Conway |
|---|---|
| Subject | Re: [GENERAL] 2 gig file size limit |
| Date | |
| Msg-id | 2585.192.168.40.6.994807025.squirrel@klamath.dyndns.org |
| In response to | 2 gig file size limit (Naomi Walker <nwalker@eldocomp.com>) |
| List | pgsql-hackers |
(This question was answered several days ago on this list; please check the list archives before posting. I believe it's also in the FAQ.)

> If PostgreSQL is run on a system that has a file size limit (2
> gig?), where might cause us to hit the limit?

Postgres will never internally use files (e.g. for tables, indexes, etc.) larger than 1GB; at that point, the file is split. However, you might run into problems when you export the data from Pg to another source, such as if you pg_dump the contents of a database larger than 2GB. In that case, filter pg_dump through gzip or bzip2 to reduce the size of the dump. If that's still not enough, you can dump individual tables (with -t) or use 'split' to divide the dump into several files.

Cheers,

Neil
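A minimal sketch of the workarounds described above, assuming a plain-format dump; the database name, table name, and file names are illustrative:

```sh
# Compress the dump as it is written, so the full uncompressed
# file never lands on disk.
pg_dump mydb | gzip > mydb.dump.gz

# If the compressed dump still exceeds the filesystem limit,
# cut the stream into 1GB pieces (mydb.dump.gz.aa, .ab, ...).
pg_dump mydb | gzip | split -b 1000m - mydb.dump.gz.

# Or dump one large table on its own with -t.
pg_dump -t big_table mydb | gzip > big_table.dump.gz

# To restore, reassemble the pieces and decompress:
cat mydb.dump.gz.* | gunzip | psql mydb
```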