Re: question

From: Francisco Olarte
Subject: Re: question
Msg-id: CA+bJJbx1J_sQ=YW7o1phO19x-2AkWgJkVo2JR_gdAvru0CPPXg@mail.gmail.com
In reply to: Re: question (Guillaume Lelarge <guillaume@lelarge.info>)
Responses: Re: question
List: pgsql-general
On Fri, Oct 16, 2015 at 8:27 AM, Guillaume Lelarge
<guillaume@lelarge.info> wrote:
> 2015-10-15 23:05 GMT+02:00 Adrian Klaver <adrian.klaver@aklaver.com>:
>> On 10/15/2015 01:35 PM, anj patnaik wrote:
...
>>> ./pg_dump -t RECORDER  -Fc postgres |  gzip > /tmp/dump
>>> Are there any other options for large tables to run faster and occupy
>>> less disk space?
>> Yes, do not double compress. -Fc already compresses the file.
> Right. But I'd say "use custom format but do not compress with pg_dump". Use
> the -Z0 option to disable compression, and use an external multi-threaded
> tool such as pigz or pbzip2 to get faster and better compression.
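
( For concreteness, the two variants under discussion would look something
like this, reusing the table and path from the original post:

    ./pg_dump -t RECORDER -Fc postgres > /tmp/recorder.dump                # built-in compression, no gzip
    ./pg_dump -t RECORDER -Fc -Z0 postgres | pigz > /tmp/recorder.dump.gz  # -Z0 plus external pigz
)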

Actually I would not recommend that, unless you are making a long-term
or offsite copy. Doing it means you need to decompress the dump before
restoring or even testing it ( e.g., via pg_restore > /dev/null ).
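
( To make that concrete: with an externally compressed dump the test
restore needs a decompression step, while a dump compressed by pg_dump
itself can be fed to pg_restore directly. File names here are just
placeholders:

    pigz -dc /tmp/recorder.dump.gz | pg_restore > /dev/null   # external: decompress first
    pg_restore /tmp/recorder.dump > /dev/null                 # built-in: read directly
)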

And if you are pressed for disk space, you may corner yourself into a
situation where you do NOT have enough disk space for an uncompressed
dump. Given you are normally nervous enough when restoring, I think
built-in compression is better for normal operations.

Also, I'm not current with the compressor -Fc uses; I think it is still
gzip, which is not that bad and is normally quite fast. ( In fact I do
not use pbzip2, but I did some tests about a year ago and found bzip2
was beaten by xz quite easily: for every level of bzip2, one of the
levels of xz beat it in BOTH size & time. That was for my data, YMMV. )
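
( If you want to repeat that kind of comparison on your own data, it is
nothing fancy, just a loop over an uncompressed dump file, something
like this ( file name is a placeholder ):

    for level in 1 3 6 9; do
        time xz -$level -c recorder.dump > recorder.dump.$level.xz
    done

and the same with bzip2, then lining up the times against the sizes
from ls -l. )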


Francisco Olarte.

