Re: Problem w/ dumping huge table and no disk space
From | Joe Conway |
---|---|
Subject | Re: Problem w/ dumping huge table and no disk space |
Date | |
Msg-id | 010201c137eb$b32d0980$0705a8c0@jecw2k1 |
In reply to | Re: Problem w/ dumping huge table and no disk space (Andrew Gould <andrewgould@yahoo.com>) |
List | pgsql-general |
> Have you tried dumping individual tables separately
> until it's all done?
>
> I've never used the -Z option, so I can't compare its
> compression to piping a pg_dump through gzip.
> However, this is how I've been doing it:
>
> pg_dump db_name | gzip -c > db_name.gz
>
> I have a 2.2 Gb database that gets dumped/compressed
> to a 235 Mb file.
>
> Andrew

Another idea you might try is to run pg_dumpall from a different host
(one with ample disk space) using the -h and -U options.

HTH,

Joe

Usage:
  pg_dumpall [ options... ]

Options:
  -c, --clean              Clean (drop) schema prior to create
  -g, --globals-only       Only dump global objects, no databases
  -h, --host=HOSTNAME      Server host name
  -p, --port=PORT          Server port number
  -U, --username=NAME      Connect as specified database user
  -W, --password           Force password prompts (should happen automatically)

Any extra options will be passed to pg_dump. The dump will be written
to the standard output.
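Putting the two suggestions together, a minimal sketch of a remote dump
piped through gzip might look like the following (the hostname
db.example.com and the postgres username are assumptions for
illustration; substitute your own, and note the server must accept
remote connections from the dumping host):

  # run on a host with ample free space; dump is compressed as it streams in
  pg_dumpall -h db.example.com -U postgres | gzip -c > all_dbs.sql.gz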