Thread: pg_dump slow with bytea data

pg_dump slow with bytea data

From: "chris r."
Date:
Dear list,

As discussed extensively in the past [1], pg_dump tends to be slow for
tables that contain bytea columns with large contents. Starting with
PostgreSQL 9.0 (called 8.5 during early development), the COPY output
format for bytea was changed from escape to hex [1], giving a roughly 50%
performance boost.

Recently, however, we have been running into serious problems with our
weekly database backup. We suspect the reason is that we converted some
columns from text holding base64-encoded binary data to bytea columns.
This change affected a large fraction of the database (~400 GB). Note that
we ran VACUUM FULL on the affected tables.

After this change our backup procedure slowed down dramatically. Whereas
the whole backup took about 8 hours before the change, pg_dump is still
busy with the first table (holding roughly 50 GB) after 12 hours.
Extrapolating from this, the backup would take roughly 10 times as long as
it did before the change. The command we run is simply:  pg_dump -f <outputfile> -F c <db>

The main reason for this immense slow-down was identified in [1] as the
conversion of bytea into a portable output format (i.e. hex). However,
given the size of the database, a factor of 10 makes backups practically
infeasible.

We do not see any good solution to our problem other than COPYing all data
in BINARY format. We understand there is a tough trade-off between backup
portability and backup efficiency here. As Bernd mentioned in [1], however,
portability is not required in all cases - and certainly not in ours.

A switch for binary output in pg_dump, or some alternative way to export
data *consistently* in binary format, would be ideal for us, and probably
for others storing bytea data as well. Or do you see another way we could
work around this issue? Obviously, having no backup at all or deleting the
binary data from the database are not serious options.

Thanks in advance for any input,
Chris

Re: pg_dump slow with bytea data

From: "chris r."
Date:
> As discussed extensively in the past [1]
Argh, forgot to add the reference:

[1]: http://archives.postgresql.org/pgsql-hackers/2009-05/msg00192.php

Chris

Re: pg_dump slow with bytea data

From: Merlin Moncure
Date:
On Wed, Mar 2, 2011 at 2:35 AM, chris r. <chricki@gmx.net> wrote:
> Dear list,
>
> As discussed extensively in the past [1], pg_dump tends to be slow for
> tables that contain bytea columns with large contents. Starting with
> postgres version 8.5 the COPY format of bytea was changed from escape to
> hex [1], giving ~50% performance boost.
>
> However, we experience heavy problems during our weekly backup of our
> database recently. We suspect the reason for this is that we changed
> some columns from text with base64-encoded binary stuff to bytea
> columns. This change affected a large fraction of the database (~400
> GB). Note that we ran VACUUM FULL on the tables affected.
>
> After this change our backup procedure heavily slowed down. Whereas it
> took about 8 hours before the change, pg_dump is still busy with the
> first table (keeping roughly 50GB) after 12 hours of backup. If I
> approximate the time to complete the backup based on this, the backup
> procedure would require factor 10 the time it required before the
> change. The command we run is simply:  pg_dump -f <outputfile> -F c <db>
>
> The main reason for this immense slow-down was identified in [1] as the
> conversion of bytea into a compatible format (i.e. hex). However, given
> the size of the db, a factor 10 makes backups practically infeasible.

hm.  where exactly is all this time getting spent?  Are you i/o bound?
cpu bound? Is there any compression going on? Maybe this is a
performance issue inside pg_dump itself, not necessarily a text/binary
issue (i have a hard time believing going from b64->hex is 10x slower
on a format basis alone).  Can you post times comparing a manual COPY via
text, a manual COPY via binary, and pg_dump -F c?
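
One rough way to answer the i/o-vs-cpu question is to watch the dump from a
second terminal; a minimal sketch, assuming a Linux host with the sysstat
tools installed and placeholder names for the dump file and database:

  # run the dump and compare wall-clock time against CPU time
  time pg_dump -f /tmp/test.dump -F c mydb

  # meanwhile, in another terminal: per-device I/O utilisation and the dump's CPU usage
  iostat -x 5
  top -p $(pgrep -n pg_dump)

If user time is close to real time and disk utilisation stays low, the dump
is CPU bound rather than i/o bound.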

merlin

Re: pg_dump slow with bytea data

From: Alban Hertroys
Date:
On 2 Mar 2011, at 9:35, chris r. wrote:

> GB). Note that we ran VACUUM FULL on the tables affected.


Did you also REINDEX them?

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.

Index question

From: Michael Black
Date:
Ok. I have been working with databases for a few years, but this is my first real venture into PostgreSQL.  I just want a plain simple index, regardless of whether there are duplicates or not.  How do I accomplish this in PostgreSQL?

Michael

Re: Index question

From: Scott Ribe
Date:
On Mar 2, 2011, at 11:31 AM, Michael Black wrote:

> Ok. I have been working with databases a few years but my first real venture in to PostgreSql.  I just want a plain
> simple index regardless if there are duplicates or not.  How do I accomplish this in PostgreSql?

Same as any other SQL database: create index foobaridx on foo(bar)...
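
Spelled out as a statement (foo, bar, and foobaridx are just placeholder names):

  -- a plain, non-unique b-tree index; duplicate values in bar are allowed
  CREATE INDEX foobaridx ON foo (bar);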

--
Scott Ribe
scott_ribe@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





Re: Index question

From: Andrew Sullivan
Date:
On Wed, Mar 02, 2011 at 06:31:57PM +0000, Michael Black wrote:
>
> Ok. I have been working with databases a few years but my first real venture in to PostgreSql.  I just want a plain
> simple index regardless if there are duplicates or not.  How do I accomplish this in PostgreSql?
>

CREATE INDEX?

A

--
Andrew Sullivan
ajs@crankycanuck.ca

Re: Index question

From: Scott Ribe
Date:
On Mar 2, 2011, at 11:43 AM, Michael Black wrote:

> Thanks Scott.  I just did not see the options in the PGAdmin III nor in the doc at

You may want to bookmark this:

<http://www.postgresql.org/docs/9.0/static/sql-commands.html>

--
Scott Ribe
scott_ribe@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





Re: Index question

From: "Joshua D. Drake"
Date:
On Wed, 2011-03-02 at 13:45 -0500, Andrew Sullivan wrote:
> On Wed, Mar 02, 2011 at 06:31:57PM +0000, Michael Black wrote:
> >
> > Ok. I have been working with databases a few years but my first real venture in to PostgreSql.  I just want a plain
> > simple index regardless if there are duplicates or not.  How do I accomplish this in PostgreSql?
> >
>
> CREATE INDEX?

Perhaps this would be useful:

http://www.postgresql.org/docs/9.0/static/index.html

And specifically:

http://www.postgresql.org/docs/9.0/static/sql-commands.html

JD


>
> A
>
> --
> Andrew Sullivan
> ajs@crankycanuck.ca
>

--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


Grant question

From: Michael Black
Date:
Ok.  What am I missing here?  B_USER is a defined Group Role

CREATE ROLE "B_USER"
  NOSUPERUSER NOINHERIT NOCREATEDB NOCREATEROLE;


GRANT SELECT PRIVILEGES
ON b.config_itm
TO ROLE B_USER;

Nets this ---------------------------------------

ERROR:  syntax error at or near "B_USER"
LINE 3: TO ROLE B_USER;
                ^

********** Error **********

ERROR: syntax error at or near "B_USER"
SQL state: 42601
Character: 42

Re: Index question

From: Michael Black
Date:
Thank you for the links.

> Subject: Re: [GENERAL] Index question
> From: jd@commandprompt.com
> To: ajs@crankycanuck.ca
> CC: pgsql-general@postgresql.org
> Date: Wed, 2 Mar 2011 11:05:58 -0800
>
> On Wed, 2011-03-02 at 13:45 -0500, Andrew Sullivan wrote:
> > On Wed, Mar 02, 2011 at 06:31:57PM +0000, Michael Black wrote:
> > >
> > > Ok. I have been working with databases a few years but my first real venture in to PostgreSql. I just want a plain simple index regardless if there are duplicates or not. How do I accomplish this in PostgreSql?
> > >
> >
> > CREATE INDEX?
>
> Perhaps this would be useful:
>
> http://www.postgresql.org/docs/9.0/static/index.html
>
> And specifically:
>
> http://www.postgresql.org/docs/9.0/static/sql-commands.html
>
> JD
>
>
> >
> > A
> >
> > --
> > Andrew Sullivan
> > ajs@crankycanuck.ca
> >
>
> --
> PostgreSQL.org Major Contributor
> Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
> Consulting, Training, Support, Custom Development, Engineering
> http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt
>
>

Re: Grant question

From: Bill Moran
Date:
In response to Michael Black <michaelblack75052@hotmail.com>:
>
> Ok.  What am I missing here?  B_USER is a defined Group Role
>
>
> CREATE ROLE "B_USER"
>
>   NOSUPERUSER NOINHERIT NOCREATEDB NOCREATEROLE;
>
>
> GRANT SELECT PRIVILEGES
>
> ON b.config_itm
>
> TO ROLE B_USER;
>
> Nets this ---------------------------------------
>
> ERROR:  syntax error at or near "B_USER"
>
> LINE 3: TO ROLE B_USER;
>
>                 ^
>
>
>
> ********** Error **********
>
>
>
> ERROR: syntax error at or near "B_USER"
>
> SQL state: 42601
>
> Character: 42

You're missing case folding.  B_USER != b_user, and when you don't put
quotes around it, the case is folded to lower case.

My personal experience advises against using case-sensitive anything in
an SQL database, but if you're going to do it, you have to do it
consistently.

--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

Re: Grant question

From: S G
Date:
On Wed, Mar 2, 2011 at 2:30 PM, Michael Black
<michaelblack75052@hotmail.com> wrote:
> Ok.  What am I missing here?  B_USER is a defined Group Role
>
> CREATE ROLE "B_USER"
>   NOSUPERUSER NOINHERIT NOCREATEDB NOCREATEROLE;
>
>
> GRANT SELECT PRIVILEGES
> ON b.config_itm
> TO ROLE B_USER;
>
> Nets this ---------------------------------------
>
> ERROR:  syntax error at or near "B_USER"
> LINE 3: TO ROLE B_USER;
>                 ^
>
> ********** Error **********
>
> ERROR: syntax error at or near "B_USER"
> SQL state: 42601
> Character: 42
>
>

Syntax in the docs seems to indicate that "TO ROLE B_USER;" should
just be "TO B_USER;"

Re: Grant question

From: Bosco Rama
Date:
Michael Black wrote:
>
> Ok.  What am I missing here?  B_USER is a defined Group Role
>
> CREATE ROLE "B_USER"

You used double-quotes here.  This will preserve case and any
non-standard identifier characters (spaces, punctuation, etc.)

> TO ROLE B_USER;

And, thus, you need to use them here.  Without the double-quotes
PG assumes lowercase.
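
Putting the corrections from this thread together - quote the identifier so
it keeps its case, drop the ROLE keyword, and note that the word PRIVILEGES
is only valid after ALL - the grant would look roughly like this:

  -- the quoted name matches the role created as "B_USER"
  GRANT SELECT ON b.config_itm TO "B_USER";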

HTH

Bosco.


Re: pg_dump slow with bytea data

From: "chris r."
Date:
Merlin, first of all, thanks for your reply!

> hm.  where exactly is all this time getting spent?  Are you i/o bound?
> cpu bound? Is there any compression going on?
Very good questions. pg_dump -F c compresses by default "at a moderate
level" (manpage), whatever compression level 'moderate' actually means.
So yes, we use compression without having explicitly enabled it.

For testing, I inserted a fraction of our huge table with bytea content
into the table 'testtable'. The next three outputs compare pg_dump for
this table with the default compression level, no compression, and
low-level compression at level 3. The time spent seems CPU-bound: in the
first test case, 90-100% of one CPU core is used the whole time.


(default compression)
time pg_dump -f /tmp/test.sql -F c -t testtable mydb
real    0m27.255s
user    0m26.383s
sys     0m0.180s

(low-level compression)
time pg_dump -f /tmp/test.sql -F c -Z 3 -t testtable mydb
real    0m8.883s
user    0m8.112s
sys     0m0.161s

(no compression)
time pg_dump -f /tmp/test.sql -F c -Z 0 -t testtable mydb
real    0m1.892s
user    0m0.074s
sys     0m0.279s

To summarize, in our scenario the default moderate-level compression caused
a slow-down of roughly a factor of 14.



In another test, I compared pg_dump on a table with textual content
that I created, rather naively, with:

 select (t/23.0)::text||(t/17.0)::text
 into testtable
 from generate_series(1, 1000000) t;

Much to my surprise, dumping this table did not show such a huge
difference when using compression: a default-compressed pg_dump took
2.4s, whereas an uncompressed pg_dump took 2.0s (merely a factor of
1.2x). However, when expanding the series to 3 million rows (instead of
1 million), the compressed pg_dump took 7.0s, whereas the uncompressed
pg_dump ran for only 2.4s (factor 3x). Does this show that compression
takes relatively longer the more data it has to compress? Memory
consumption stayed below 12 MB during testing.


> Maybe this is a
> performance issue inside pg_dump itself, not necessarily a text/binary
> issue (i have a hard time believing going from b64->hex is 10x slower
> on format basis alone).  Can you post times comparing manual COPY via
> text, manual COPY via binary, and pg_dump -F c?
Again, valid points. As a next step, I'll compare the COPY variants.

time psql mydb -c "COPY testtable TO '/tmp/test.sql' WITH (FORMAT 'text');"
real    0m1.712s
user    0m0.001s
sys     0m0.004s

In text format, the time it takes to COPY testtable is very close to the
time pg_dump needs without compression.

Interestingly, COPYing testtable in binary format gives a further speedup
of roughly 3.6x:

time psql mydb -c "COPY testtable TO '/tmp/test.sql' WITH (FORMAT
'binary');"
real    0m0.470s
user    0m0.000s
sys     0m0.005s

As one may argue that the table was too small for a meaningful runtime
comparison, I repeated this second comparison with two larger tables - both
times the binary format showed a 5x-6x speedup! In either format the
operation seemed CPU-bound (> 95% of a core was used).



To summarize, I could speed up my backup by removing compression (a factor
of 14) and by using COPY in binary format instead of pg_dump (a factor of 5
to 6). However, only the first option keeps the backup complete and
consistent. To get an easy, integrity-safe backup, IMHO, the second option
can only be achieved with an additional switch in pg_dump allowing binary
output.
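
For reference, a reasonably consistent binary export can already be
approximated by running all COPY statements inside a single REPEATABLE READ
transaction, so that every table is read from the same snapshot. A minimal
sketch (table names are placeholders; note that server-side COPY writes the
files on the database server and requires superuser rights):

  psql mydb <<'SQL'
  BEGIN ISOLATION LEVEL REPEATABLE READ;
  -- both COPYs below see the same snapshot, similar to what pg_dump guarantees
  COPY table_a TO '/backup/table_a.bin' WITH (FORMAT 'binary');
  COPY table_b TO '/backup/table_b.bin' WITH (FORMAT 'binary');
  COMMIT;
  SQL

Restoring means replaying a matching COPY ... FROM ... WITH (FORMAT 'binary')
per table, and the binary format is less portable across architectures and
PostgreSQL versions than a text dump.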


Any comments on these measurements? Thanks again for your input!

Regards,
Chris

Re: pg_dump slow with bytea data

From: Merlin Moncure
Date:
On Mon, Mar 7, 2011 at 7:28 AM, chris r. <chricki@gmx.net> wrote:
> Merlin, first of all, thanks for your reply!
>
>> hm.  where exactly is all this time getting spent?  Are you i/o bound?
>> cpu bound? Is there any compression going on?
> Very good questions. pg_dump -F c compresses per default "at a moderate
> level" (manpage), whatever compression level 'moderate' actually means.
> Thus, yes, without explicitly activating it, we use compression.
>
> For testing, I inserted a fraction of our huge table with bytea content
> to the table 'testtable'. The next three outputs compare pg_dump for
> this table with default compression level, no compression and low-level
> compression on level 3. The time spent seems CPU-bound, as in the first
> test case 90-100% of a CPU-core is used all over the time.
>
>
> (default compression)
> time pg_dump -f /tmp/test.sql -F c -t testtable mydb
> real    0m27.255s
> user    0m26.383s
> sys     0m0.180s
>
> (low-level compression)
> time pg_dump -f /tmp/test.sql -F c -Z 3 -t testtable mydb
> real    0m8.883s
> user    0m8.112s
> sys     0m0.161s
>
> (no compression)
> time pg_dump -f /tmp/test.sql -F c -Z 0 -t testtable mydb
> real    0m1.892s
> user    0m0.074s
> sys     0m0.279s
>
> To summarize, in our case-scenario, moderate-level compression caused a
> speed-loss of factor 14.

right -- well in the short term it looks like you should consider
lowering or disabling compression.

> In another test, I'll compare pg_dump of a table with textual content
> that I created stupidly with:
>
>  select (t/23.0)::text||(t/17.0)::text
>  into testtable
>  from generate_series(1, 1000000) t;
>
> Very much to my surprise, dumping this table did not show such a huge
> difference when using compression: a default-compressed pg_dump took
> 2.4s, whereas a non-compressed pg_dump took 2.0s (which is merely factor
> 1.2x). However, when expanding the series to 3 mio (instead of 1 mio),
> the compressed pg_dump took 7.0s, whereas a non-compressed pg_dump ran
> for 2.4s only (factor 3x). Does this show that compression takes
> relatively longer the more data it needs to compress? Memory consumption
> was less than 12 MB during testing.

Most compression algs don't use a lot of memory.  Also, as a general
rule of thumb, low entropy data compresses much faster than high
entropy data, so you can't really compare synthetic tests like that to
real world data, as you discovered.

Unfortunately, compression is something of a weak point for the
postgres project: there are much better bang/buck ratio algorithms out
there that we can't use because of licensing or patent concerns.
There are a lot of easy workarounds though (like rigging a command-line
compressor to run after the dump) so it isn't really a big deal for
backups.  You may want to investigate whether your bytea columns are being
toast compressed and look there if you are having performance issues.
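
A sketch of both workarounds; the choice of gzip and the column name 'data'
are just examples:

  # dump without pg_dump's built-in compression and compress externally instead
  pg_dump -F c -Z 0 mydb | gzip -1 > /backup/mydb.dump.gz

  # check how the bytea column is stored: attstorage 'x' (extended) means TOAST
  # will try to compress it, 'e' (external) means it is stored uncompressed
  psql mydb -c "SELECT attname, attstorage FROM pg_attribute WHERE attrelid = 'testtable'::regclass;"

  # if the column holds already-compressed data, skip the compression attempt;
  # this setting only affects newly written rows
  psql mydb -c "ALTER TABLE testtable ALTER COLUMN data SET STORAGE EXTERNAL;"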

>> Maybe this is a
>> performance issue inside pg_dump itself, not necessarily a text/binary
>> issue (i have a hard time believing going from b64->hex is 10x slower
>> on format basis alone).  Can you post times comparing manual COPY via
>> text, manual COPY via binary, and pg_dump -F c?
> Again, valid points. As a next step, I'll compare the COPY variants.
>
> time psql mydb -c "COPY testtable TO '/tmp/test.sql' WITH (FORMAT 'text');"
> real    0m1.712s
> user    0m0.001s
> sys     0m0.004s
>
> In text format, the time it takes to COPY testtable seems very much like
> the time it takes to run pg_dump without compression.
>
> Interestingly, COPYing testtable with binary format gives another factor
> 3.6x speedup:
>
> time psql mydb -c "COPY testtable TO '/tmp/test.sql' WITH (FORMAT
> 'binary');"
> real    0m0.470s
> user    0m0.000s
> sys     0m0.005s
>
> As one may argue the table was too small to compare the runtime, I
> repeated this second comparison with two larger tables - both times
> showing between 5x-6x speedup with binary format! In either format the
> operation seemed CPU bound (> 95% of a core was taken).
>
>
>
> To summarize, I could speed up my backup by removing compression (factor
> 14) and using COPY in binary format instead of pg_dump (factor 5 to
> factor 6). However, only the first option would keep data integrity. To
> have an easy integrity-save backup, IMHO, the second option can only be
> achieved by having an additional switch in pg_dump allowing for binary
> output.

Well, that's a pretty telling case, although I'd venture to say not
typical.  In average databases, I'd expect a 10-50% improvement
going from text->binary, which is often not enough to justify the
compatibility issues.  Does it justify a 'binary' switch to pg_dump?
I'd say so -- as long as the changes required aren't too extensive
(although you can expect disagreement on that point).  hm. i'll take a
look...

merlin

Re: pg_dump slow with bytea data

From: Merlin Moncure
Date:
On Mon, Mar 7, 2011 at 8:52 AM, Merlin Moncure <mmoncure@gmail.com> wrote:
> Well, that's a pretty telling case, although I'd venture to say not
> typical.  In average databases, I'd expect 10-50% range of improvement
> going from text->binary which is often not enough to justify the
> compatibility issues.  Does it justify a 'binary' switch to pg_dump?
> I'd say so -- as long as the changes required aren't to extensive
> (although you can expect disagreement on that point).  hm. i'll take a
> look...

The changes don't look too bad, but they are not trivial.

On the backup side, it just does a text/binary-agnostic copy directly to
stdout.  You'd need to create a switch of course, and I'm assuming add
a flag isbinary to ArchiveHandle and possibly a stream length to the
tocEntry for each table (or should that just be a header on the binary
stream?).  On the restore side it's a bit more complicated -- the
current code is a completely text-oriented monster, grepping each line for
unquoted newlines, assuming ascii '0' is the end of the data, etc.  You
would need a completely separate code path for binary, but it would be
much smaller and simpler (and faster!).  There might be some other
issues too, I just did a cursory scan of the code.

merlin