Scott Carey <scott@richrelevance.com> writes:
> I get very different (contradictory) behavior. Server with fast RAID, 32GB
> RAM, 2 x 4 core 3.16Ghz Xeon 54xx CPUs. CentOS 5.2
> PostgreSQL 8.3.6
> No disk wait time during any test. One test beforehand was used to prime
> the disk cache.
> 100% CPU in the below means one core fully used. 800% means the system is
> fully loaded.
> pg_dump > file (on a subset of the DB with lots of tables with small
> tuples)
> 6m27s, 4.9GB; 12.9MB/sec
> 50% CPU in postgres, 50% CPU in pg_dump
> pg_dump -Fc > file.gz
> 9m6s, output is 768M (6.53x compression); 9.18MB/sec
> 30% CPU in postgres, 70% CPU in pg_dump
> pg_dump | gzip > file.2.gz
> 6m22s, 13MB/sec.
> 50% CPU in postgres, 50% CPU in pg_dump, 50% CPU in gzip
I don't see anything very contradictory here. What you're demonstrating
is that it's nice to be able to throw a third CPU at the compression
part of the problem. That's likely to remain true if we shift to a
different compression algorithm. I suspect if you substituted lzo for
gzip in the third case, the picture wouldn't change very much.
regards, tom lane
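[Editor's note: the pipe pattern in the third test case above can be sketched without a database. This is a minimal illustration, not the original benchmark: `seq` stands in for the pg_dump output stream, and the point is only that the external compressor runs as a separate process, free to occupy its own CPU core.]

```shell
#!/bin/sh
set -e
# Stand-in for: pg_dump mydb > dump.sql
seq 1 100000 > dump.sql
# Stand-in for: pg_dump mydb | gzip > dump.sql.gz
# gzip runs in its own process, so compression can proceed on a
# separate core while the producer keeps writing.
gzip -c < dump.sql > dump.sql.gz
# Verify the round trip is lossless.
gunzip -c dump.sql.gz | cmp - dump.sql && echo "round trip OK"
```

The same pipeline shape works with any stream compressor (e.g. substituting a faster algorithm for gzip, as suggested above); only the command name changes.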