Re: a faster compression algorithm for pg_dump

From: daveg
Subject: Re: a faster compression algorithm for pg_dump
Date:
Msg-id: 20100415005447.GI23641@sonic.net
In response to: Re: a faster compression algorithm for pg_dump  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Tue, Apr 13, 2010 at 03:03:58PM -0400, Tom Lane wrote:
> Joachim Wieland <joe@mcknight.de> writes:
> > If we still cannot do this, then what I am asking is: What does the
> > project need to be able to at least link against such a compression
> > algorithm?
> 
> Well, what we *really* need is a convincing argument that it's worth
> taking some risk for.  I find that not obvious.  You can pipe the output
> of pg_dump into your-choice-of-compressor, for example, and that gets
> you the ability to spread the work across multiple CPUs in addition to
> eliminating legal risk to the PG project.  And in any case the general
> impression seems to be that the main dump-speed bottleneck is on the
> backend side not in pg_dump's compression.
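The pipe approach Tom describes can be sketched as follows. This is only an illustration, not anything from the thread: it assumes a parallel compressor such as pigz is installed, and the database name and output filename are placeholders.

```shell
# Sketch of compressing outside pg_dump so the compressor can use
# multiple CPUs. Assumes pigz (parallel gzip) is available; "mydb"
# and the filenames are placeholders.
pg_dump -Fp mydb | pigz -p 4 > mydb.sql.gz

# To restore, decompress and feed the plain-format dump back to psql:
#   gunzip -c mydb.sql.gz | psql mydb
```

Note that this uses the plain (-Fp) format, since compression is handled entirely by the external program rather than by pg_dump itself.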

My client uses pg_dump -Fc and produces about 700GB of compressed PostgreSQL dumps nightly from multiple hosts. They also depend on being able to read and filter the dump catalog. A faster compression algorithm would be a huge benefit for dealing with this volume.
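Reading and filtering the catalog of a custom-format (-Fc) dump is done through pg_restore's table-of-contents listing. A minimal sketch, with placeholder file and table names:

```shell
# List the dump catalog (TOC) of a custom-format dump, filter out an
# unwanted entry, and restore only the remaining items.
# "mydb.dump", "mydb", and "big_log" are placeholders.
pg_restore -l mydb.dump > toc.list                    # read the catalog
grep -v 'TABLE DATA public big_log' toc.list > toc.filtered
pg_restore -L toc.filtered -d mydb mydb.dump          # restore filtered set
```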

-dg

-- 
David Gould       daveg@sonic.net      510 536 1443    510 282 0869
If simplicity worked, the world would be overrun with insects.

