Re: Add LZ4 compression in pg_dump
From: Tom Lane
Subject: Re: Add LZ4 compression in pg_dump
Msg-id: 25345.1760289877@sss.pgh.pa.us
In reply to: Re: Add LZ4 compression in pg_dump (Michael Paquier <michael@paquier.xyz>)
Responses: Re: Add LZ4 compression in pg_dump
List: pgsql-hackers
[ blast-from-the-past department ]

Michael Paquier <michael@paquier.xyz> writes:
> At the end I am finishing with the attached.  I also saw an overlap
> with the addition of --jobs for the directory format vs not using the
> option, so I have removed the case where --jobs was not used in the
> directory format.

(This patch became commit 98fe74218.)

I am wondering if you remember why this bit:

+       # Give coverage for manually compressed blob.toc files during
+       # restore.
+       compress_cmd => {
+           program => $ENV{'GZIP_PROGRAM'},
+           args => [ '-f', "$tempdir/compression_gzip_dir/blobs.toc", ],
+       },

was set up to manually compress blobs.toc but not the main TOC in the
toc.dat file.  It turns out that Gzip_read is broken for the case of a
zero-length read request [1], but we never reach that case unless
toc.dat is compressed.  We don't cover the getc_func member of the
compression stream API, either.

I thought for a bit about proposing that we compress toc.dat but not
blobs.toc, but that loses coverage in another way: the gets_func API
turns out to be used only while reading blobs.toc.  So we need to
compress both files manually if we want full coverage.

I think this change won't lose coverage, because there are other tests
in 002_pg_dump.pl that exercise directory format without extra
compression of anything.

Thoughts?

			regards, tom lane

[1] https://www.postgresql.org/message-id/3686.1760232320%40sss.pgh.pa.us
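For concreteness, here is a minimal sketch of what the gzip stanza
might look like with both files compressed manually.  This is
illustrative only, not the committed hunk; it assumes gzip's usual
handling of multiple file arguments and reuses the paths from the
existing test:

    # Compress both the main TOC and the blob TOC, so that the
    # zero-length-read path in Gzip_read and getc_func (reached only
    # with a compressed toc.dat) and gets_func (used only while
    # reading blobs.toc) all get exercised during restore.
    compress_cmd => {
        program => $ENV{'GZIP_PROGRAM'},
        args => [
            '-f',
            "$tempdir/compression_gzip_dir/toc.dat",
            "$tempdir/compression_gzip_dir/blobs.toc",
        ],
    },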