Discussion: pg_dump far too slow

pg_dump far too slow

From: David Newall
Date:
Evening all,

Maiden post to this list.  I've a performance problem for which I'm uncharacteristically in need of good advice.

I have a read-mostly database using 51GB on an ext3 filesystem on a server running Ubuntu 9.04 and PG 8.3.  Forty hours ago I started a plain-format dump, compressed with -Z9, and it is still running, having produced 32GB of an expected 40 - 45GB of compressed output.  CPU load is 100% on the core executing pg_dump, and negligible on all other cores.  The system is read-mostly, and largely idle.  The exact invocation was:

    nohup time pg_dump -f database.dmp -Z9 database

I presumed pg_dump was CPU-bound because of gzip compression, but a test I ran makes that seem unlikely: copying the database files to a USB hard drive (cp -r /var/lib/postgresql/8.3/main /mnt) took 25 minutes, and gzip-compressing the first 1GB of the dumpfile (dd if=database.dmp bs=64k count=16000 | time gzip -9 > dd.gz) took one minute and 15 seconds, so gzipping the complete 51GB set of files should take no more than 90 minutes.

The database is unremarkable except for one table, the biggest, which contains a bytea column, and which pg_dump has been outputting for at least 39 hours.  That table has 276,292 rows: the bytea in 140,695 of them contains PDFs totalling 32,791MB, and the bytea in the remaining 135,597 rows contains PostScript totalling 602MB.  I have never done a full vacuum, only auto-vacuum; however, I did copy the table to a new one, deleted the old, and renamed the copy, which I expect is effectively equivalent.  The table is described by the following schema:

             Table "database.bigtable"   Column    |       Type        |     Modifiers    
--------------+-------------------+--------------------headerid     | integer           | not nullmember       | numeric(10,0)     | not nullpostcode     | character varying |bsp          | character varying |details      | bytea             | not nullmembertypeid | integer           | not null default 0
Indexes:   "bigtable_pkey" PRIMARY KEY, btree (headerid, member)   "bigtable_member" btree (member)
Foreign-key constraints:   "bigtable_headerid_fkey" FOREIGN KEY (headerid) REFERENCES header(headerid)

The following describes the application environment:
  • PostgreSQL 8.3.8 on i486-pc-linux-gnu, compiled by GCC gcc-4.3.real (Ubuntu 4.3.3-5ubuntu4) 4.3.3
  • pg_dump (PostgreSQL) 8.3.8
  • Ubuntu 9.04
  • Linux server 2.6.28-13-server #45-Ubuntu SMP Tue Jun 30 20:51:10 UTC 2009 i686 GNU/Linux
  • Intel(R) Xeon(R) CPU E5430  @ 2.66GHz (4 core)
  • RAM 2GB
  • 2 SATA, 7200rpm disks with hardware  RAID-1 (IBM ServeRAID)

My question is, what's going on?

Thanks,

David

Re: pg_dump far too slow

From: Tom Lane
Date:
David Newall <postgresql@davidnewall.com> writes:
> [ very slow pg_dump of table with large bytea data ]

Did you look at "vmstat 1" output to see whether the system was under
any large I/O load?

Dumping large bytea data is known to be slow for a couple of reasons:

1. The traditional text output format for bytea is a bit poorly chosen.
It's not especially cheap to generate and it interacts very badly with
COPY processing, since it tends to contain lots of backslashes which
then have to be escaped by COPY.

2. Pulling the data from the out-of-line "toast" table can be expensive
if it ends up seeking all over the disk to do it.  This will show up as
a lot of seeking and I/O wait, rather than CPU expense.

Since you mention having recently recopied the table into a new table,
I would guess that the toast table is reasonably well-ordered and so
effect #2 shouldn't be a big issue.  But it's a good idea to check.

PG 9.0 is changing the default bytea output format to hex, in part
to solve problem #1.  That doesn't help you in an 8.3 installation
of course.  If you're desperate you could consider excluding this
table from your pg_dumps and backing it up separately via COPY BINARY.
The PITA factor of that might be more than you can stand though.
Offhand I can't think of any other way to ameliorate the problem
in 8.3.
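
A rough sketch of that approach (the table and path names here are only examples):

    # routine dump, with the big bytea table excluded
    pg_dump -T bigtable -f database_sans_bigtable.dmp -Z9 database

    # one-off server-side binary copy of the excluded table
    # (superuser only; the file is written on the database server)
    # restore later with COPY bigtable FROM '...' WITH BINARY
    psql -d database -c "COPY bigtable TO '/backup/bigtable.copy' WITH BINARY"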

            regards, tom lane

Re: pg_dump far too slow

From: Robert Haas
Date:
On Sun, Mar 14, 2010 at 4:01 AM, David Newall
<postgresql@davidnewall.com> wrote:
> an expected 40 - 45GB of compressed output.  CPU load is 100% on the core
> executing pg_dump, and negligible on all other cores.  The system is
> read-mostly, and largely idle.  The exact invocation was:
>
>     nohup time pg_dump -f database.dmp -Z9 database

Can you connect a few times with gdb and do "bt" to get a backtrace?
That might shed some light on where it's spending all of its time.
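
For instance (a sketch; assumes gdb is installed and pg_dump is the only matching process):

    # attach briefly, print a backtrace, and detach; repeat a few times
    gdb --batch -ex bt -p $(pgrep -f 'pg_dump -f database.dmp')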

...Robert

Re: pg_dump far too slow

From: Dave Crooke
Date:
As a fellow PG newbie, some thoughts / ideas ....

1. What is the purpose of the dump (backup, migration, ETL, etc.)? Why plain format? Unless you have a need to load this into a different brand of database at short notice, I'd use the native (custom) format.

2. If you goal is indeed to get the data into another DB, use an app which can do a binary-to-binary transfer, e.g. a little homegrown tool in Java that connects to both with JDBC, or a data migration ETL tool.

3. If pg_dump is still CPU bound, then don't let pg_dump compress the archive; instead do  pg_dump -F c -Z 0 ... | gzip > foo.dmp.gz ... this way the compression runs on a different core from the formatting (see the sketch after this list).

4. Don't use -Z9; the return on investment isn't worth it (especially if you are CPU bound). Use the default gzip compression level instead, or, if you need to minimize storage, experiment with higher levels until the CPU running gzip is close to, but not totally, maxed out.

5. I see no other drives mentioned ... is your dump being written to a partition on the same RAID-1 pair that PG is running on? Spring for another drive to avoid the seek contention ... even if you were to stream the dump to a temporary filesystem on a single commodity consumer drive ($99 for a 1.5TB SATA-300 spindle) with no RAID, you could then copy it back to the RAID set after pg_dump completes, and I'd give you good odds it'd be a quicker end-to-end process.
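
A sketch of what point 3 might look like in practice (the file name is just an example):

    # no compression inside pg_dump; gzip runs on another core
    pg_dump -F c -Z 0 database | gzip > foo.dmp.gz

    # to restore: decompress, then feed the custom-format archive to pg_restore
    gunzip foo.dmp.gz && pg_restore -d database foo.dmp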

Cheers
Dave

On Sun, Mar 14, 2010 at 3:01 AM, David Newall <postgresql@davidnewall.com> wrote:
Evening all,

Maiden post to this list.  I've a performance problem for which I'm uncharacteristically in need of good advice.

I have a read-mostly database using 51GB on an ext3 filesystem on a server running Ubuntu 9.04  and PG 8.3.  Forty hours ago I started a plain-format dump, compressed with -Z9, and it is still running, having produced 32GB of an expected 40 - 45GB of compressed output. 
<snip>

Re: pg_dump far too slow

From: Matthew Wakeling
Date:
On Sun, 14 Mar 2010, David Newall wrote:
>   nohup time pg_dump -f database.dmp -Z9 database
>
> I presumed pg_dump was CPU-bound because of gzip compression, but a test I
> ran makes that seem unlikely...

There was some discussion about this a few months ago at
http://archives.postgresql.org/pgsql-performance/2009-07/msg00348.php

It seems that getting pg_dump to do the compression is a fair amount
slower than piping the plain format dump straight through gzip. You get a
bit more parallelism that way too.
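
For example, something as simple as this keeps the compression in a separate process:

    pg_dump database | gzip > database.dmp.gz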

Matthew


--
 I'm always interested when [cold callers] try to flog conservatories.
 Anyone who can actually attach a conservatory to a fourth floor flat
 stands a marginally better than average chance of winning my custom.
 (Seen on Usenet)

Re: pg_dump far too slow

From: David Newall
Date:
Thanks for all of the suggestions, guys, which gave me some pointers on
new directions to look, and I learned some interesting things.

The first interesting thing was that piping (uncompressed) pg_dump into
gzip, instead of using pg_dump's internal compressor, does bring a lot
of extra parallelism into play.  (Thank you, Matthew Wakeling.)  I
observed gzip using 100% CPU, as expected, and also two, count them, two
postgres processes collecting data, each consuming a further 80% CPU.
It seemed to me that Postgres was starting and stopping these to match
the capacity of the consumer (i.e. pg_dump and gzip.)  Very nice.
Unfortunately one of these processes dropped eventually, and, according
to top, the only non-idle process running was gzip (100%.)  Obviously
there were postgres and pg_dump processes, too, but they were throttled
by gzip's rate of output and effectively idle (less than 1% CPU).  That
is also interesting.  The final output from gzip was being produced at
the rate of about 0.5MB/second, which seems almost unbelievably slow.

I next tried Tom Lane's suggestion, COPY WITH BINARY, which produced the
complete 34GB file in 30 minutes (a good result.)  I then compressed
that with gzip, which took an hour and reduced the file to 32GB (hardly
worth the effort) for a total run time of 90 minutes.  In that instance,
gzip produced output at the rate of 10MB/second, so I tried pg_dump -Z0
to see how quickly that would dump the file.  I had the idea that I'd go
on to see how quickly gzip would compress it, but unfortunately it
filled my disk before finishing (87GB at that point), so there's
something worth knowing: pg_dump's output for binary data is very much
less compact than COPY WITH BINARY; all those backslashes, as Tom
pointed out.  For the aforementioned reason, I didn't get to see how
gzip would perform.  For the record, pg_dump with no compression
produced output at the rate of 26MB/second; a rather meaningless number
given the 200%+ expansion of final output.

I am now confident the performance problem is from gzip, not Postgres,
and wonder if I should read up on gzip to find why it would work so
slowly on a pure text stream, albeit a representation of PDF which
intrinsically is fairly compressed.  Given the spectacular job that
postgres did in adjusting its rate of output to match the consumer
process, I did wonder if there might have been a tragic interaction
between postgres and gzip; perhaps postgres limits its rate of output to
match gzip; and gzip tries to compress what's available, that being only
a few bytes; and perhaps that might be so inefficient that it hogs the
CPU; but I don't think that likely.  I had a peek at gzip's source
(surprisingly readable) and on first blush it does seem that unfortunate
input could result in only a few bytes being written each time through
the loop, meaning only a few more bytes could be read in.

Just to complete the report, I created a child table to hold the PDFs,
which are static, took a dump of just that table, and adjusted my
backup command to exclude it.  The total size of the compressed backup
sans PDFs is circa 7MB, taking around 30 seconds.
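
For the record, the split looks roughly like this (the table and file names here are placeholders, not the real ones):

    # one-off dump of the static PDF table
    pg_dump -t pdf_table -f pdfs_once.dmp -Z9 database

    # routine backup with that table excluded
    pg_dump -T pdf_table -f database.dmp -Z9 database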

Re: pg_dump far too slow

From: Craig Ringer
Date:
On 21/03/2010 9:17 PM, David Newall wrote:
> Thanks for all of the suggestions, guys, which gave me some pointers on
> new directions to look, and I learned some interesting things.
>

> Unfortunately one of these processes dropped eventually, and, according
> to top, the only non-idle process running was gzip (100%.) Obviously
> there were postgres and pg_dump processes, too, but they were throttled
> by gzip's rate of output and effectively idle (less than 1% CPU). That
> is also interesting. The final output from gzip was being produced at
> the rate of about 0.5MB/second, which seems almost unbelievably slow.

CPU isn't the only measure of interest here.

If pg_dump and the postgres backend it's using are doing simple work
such as reading linear data from disk, they won't show much CPU activity
even though they might be running full-tilt. They'll be limited by disk
I/O or other non-CPU resources.

> and wonder if I should read up on gzip to find why it would work so
> slowly on a pure text stream, albeit a representation of PDF which
> intrinsically is fairly compressed.

In fact, PDF uses deflate compression, the same algorithm used for gzip.
Gzip-compressing PDF is almost completely pointless - all you're doing
is compressing some of the document structure, not the actual content
streams. With PDF 1.5 and above using object and xref streams, you might
not even be doing that, instead only compressing the header and trailer
dictionary, which are probably in the order of a few hundred bytes.

Compressing PDF documents is generally a waste of time.

--
Craig Ringer

Re: pg_dump far too slow

From: Dave Crooke
Date:
One more from me ....

If you think that the pipe to GZIP may be causing pg_dump to stall, try putting something like buffer(1) in the pipeline ... it doesn't generally come with Linux, but you can download source or create your own very easily ... all it needs to do is asynchronously poll stdin and write stdout. I wrote one in Perl when I used to do a lot of digital video hacking, and it helped with chaining together tools like mplayer and mpeg.
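
If you'd rather not write your own, a ready-made buffering tool such as mbuffer can be dropped straight into the pipe (a sketch; assumes mbuffer is installed, and the buffer size is just an example):

    pg_dump -Z0 database | mbuffer -m 256M | gzip > database.dmp.gz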

However, my money says that Tom's point about it being (disk) I/O bound is correct :-)

Cheers
Dave

On Sun, Mar 21, 2010 at 8:17 AM, David Newall <postgresql@davidnewall.com> wrote:
Thanks for all of the suggestions, guys, which gave me some pointers on new directions to look, and I learned some interesting things.

<snip>

Re: pg_dump far too slow

From: Tom Lane
Date:
Craig Ringer <craig@postnewspapers.com.au> writes:
> On 21/03/2010 9:17 PM, David Newall wrote:
>> and wonder if I should read up on gzip to find why it would work so
>> slowly on a pure text stream, albeit a representation of PDF which
>> intrinsically is fairly compressed.

> In fact, PDF uses deflate compression, the same algorithm used for gzip.
> Gzip-compressing PDF is almost completely pointless -

Yeah.  I would bet that the reason for the slow throughput is that gzip
is fruitlessly searching for compressible sequences.  It won't find many.

            regards, tom lane

Re: pg_dump far too slow

From: David Newall
Date:
Tom Lane wrote:
> I would bet that the reason for the slow throughput is that gzip
> is fruitlessly searching for compressible sequences.  It won't find many.
>


Indeed, I didn't expect much reduction in size, but I also didn't expect
a twenty-fold increase in run-time (i.e. output at
10MB/second going down to 500KB/second), particularly as my estimate was
based on gzipping a previously gzipped file.  I think it's probably
pathological data, as it were.  Might even be of interest to gzip's
maintainers.

Re: pg_dump far too slow

From: Bob Lunney
Date:
If you have a multi-processor machine (more than 2 cores) you could look into pigz, which is a parallelized implementation of gzip.  I've gotten dramatic reductions in wall time using it to zip dump files.  The compressed file is still readable by gunzip.
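
A sketch of what that might look like (the thread count is just an example):

    # compress on 4 cores; the output is ordinary gzip format
    pg_dump -Z0 database | pigz -p 4 > database.dmp.gz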

Bob Lunney

From: Dave Crooke <dcrooke@gmail.com>
Subject: Re: [PERFORM] pg_dump far too slow
To: "David Newall" <postgresql@davidnewall.com>
Cc: "Tom Lane" <tgl@sss.pgh.pa.us>, pgsql-performance@postgresql.org, robertmhaas@gmail.com
Date: Sunday, March 21, 2010, 10:33 AM

One more from me ....

<snip>


Re: pg_dump far too slow

From: Scott Carey
Date:
On Mar 21, 2010, at 8:50 AM, David Newall wrote:

> Tom Lane wrote:
>> I would bet that the reason for the slow throughput is that gzip
>> is fruitlessly searching for compressible sequences.  It won't find many.
>>
>
>
> Indeed, I didn't expect much reduction in size, but I also didn't expect
> a twenty-fold increase in run-time (i.e. output at
> 10MB/second going down to 500KB/second), particularly as my estimate was
> based on gzipping a previously gzipped file.  I think it's probably
> pathological data, as it were.  Might even be of interest to gzip's
> maintainers.
>

gzip -9 is known to be very inefficient.  It is hardly ever more compact than -7, and often 2x slower or worse.
It's almost never worth using unless you don't care how long the compression takes.

Try -Z1

At compression level 1 the output will often be compressed well enough, at rather fast speeds.  It is about 6x as fast
as gzip -9 and typically creates result files about 10% larger.
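
In this case that would be something like:

    pg_dump -f database.dmp -Z1 database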

For some compression/decompression speed benchmarks see:

http://tukaani.org/lzma/benchmarks.html


