Andres Freund <andres@2ndquadrant.com> wrote:
> Kevin Grittner <kgrittn@ymail.com> wrote:
>> (2) An initial performance test didn't look very good. I will be
>> running a more controlled test to confirm but the logical
>> replication of a benchmark with a lot of UPDATEs of compressed text
>> values seemed to suffer with the logical replication turned on.
>> Any suggestions or comments on that front, before I run the more
>> controlled benchmarks?
>
> Hm. There theoretically shouldn't actually be anything added in that
> path. Could you roughly sketch what that test is doing? Do you actually
> stream those changes out or did you just turn on wal_level=logical?
It was an UPDATE of every row in a table of 720,000 rows, with
each row updated by primary key in a separate UPDATE statement,
modifying a large text column with a lot of repeating characters
(so it compressed well).  I got a timing on a master build and a
timing with the patch in the environment used by
test_logical_decoding. It took several times as long in the latter
run, but it was very much a preliminary test in preparation for
getting real numbers. (I'm sure you know how much work it is to
set up for a good run of tests.)  I'm not sure that (for example)
the synchronous_commit setting was the same, which could matter a
lot.  I wouldn't put much stock in it until I can reproduce it
under much more controlled conditions.
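For reference, the workload was shaped roughly like this -- a sketch only,
not the actual test script; the table name (t), column names (id, big_text),
and payload size are made up for illustration:

```python
import zlib

# Hypothetical shape of the benchmark: one UPDATE per row, issued by
# primary key, setting a large, highly repetitive text value.
ROWS = 720000
payload = "abcdef" * 500  # ~3 kB of repeating text; compresses very well


def update_statements(n_rows, text):
    # One UPDATE statement per primary-key value, as in the test run.
    for pk in range(1, n_rows + 1):
        yield f"UPDATE t SET big_text = '{text}' WHERE id = {pk};"


# The repetitive payload shrinks dramatically under compression, which is
# why the compression/decompression routines are hot enough to show up in
# a profile of this workload at all.
ratio = len(zlib.compress(payload.encode())) / len(payload)
```

(zlib here just stands in for the general point that the value is highly
compressible; PostgreSQL's TOAST compression is a different algorithm.)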
The one thing about the whole episode that gave me pause was that
the compression and decompression routines were very high on the
`perf top` output in the patched run and way down the list on the
run based on master.  I don't have a ready explanation for that,
unless your branch was missing a recent commit that sped up
compression and was present on master.  It might also be worth
checking that you're not detoasting more often than you need to.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company