Re: pg_dump's over 2GB

From	Jeff Hoffmann
Subject	Re: pg_dump's over 2GB
Date
Msg-id	39D4C64F.378F09BB@propertykey.com
In reply to	pg_dump's over 2GB  ("Bryan White" <bryan@arcamax.com>)
Responses	Re: pg_dump's over 2GB  ("Ross J. Reedstrom" <reedstrm@rice.edu>)
List	pgsql-general
Bryan White wrote:
>
> I am thinking that
> instead I will need to pipe pg_dump's output into gzip, thus avoiding the
> creation of a file of that size.
>

Sure, I do it all the time.  Unfortunately, I've had it happen a few
times where even a gzipped database dump goes over 2GB, which is a real
PITA since I then have to dump some tables individually.  Generally, I do
something like
    pg_dump database | gzip > database.pgz
to dump the database and
    gzip -dc database.pgz | psql database
to restore it.  I've always thought that compression should be an option
for pg_dump, but it's really not that much more work to just pipe the
input and output through gzip.
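[Editor's note: when even the compressed dump exceeds the 2GB file limit, one alternative to dumping tables individually is to chop the compressed stream into fixed-size pieces with `split`. This is a sketch, not from the original message; the database name `mydb` and the 1000MB chunk size are illustrative assumptions.]

```shell
# Dump and compress, then split the stream into chunks well under 2GB each.
# The trailing "mydb.pgz." is the prefix for the chunk files (mydb.pgz.aa, ...).
pg_dump mydb | gzip | split -b 1000m - mydb.pgz.

# Restore: concatenate the chunks in order, decompress, and feed to psql.
cat mydb.pgz.* | gzip -dc | psql mydb
```

Because `split` names its output files in lexical order, a plain `cat` of the glob reassembles the original compressed stream exactly.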

--

Jeff Hoffmann
PropertyKey.com

In the pgsql-general list, by date:

Previous
From: "Adam Lang"
Date:
Message: Fw: Redhat 7 and PgSQL
Next
From: "Ross J. Reedstrom"
Date:
Message: Re: pg_dump's over 2GB