Discussion: Performance problem with row count trigger


Performance problem with row count trigger

From:
Tony Cebzanov
Date:
I was looking to speed up a count(*) query, as per the recommendations
on the postgres wiki:

http://wiki.postgresql.org/wiki/Slow_Counting

I decided to use the trigger approach to get an accurate count that
doesn't depend on VACUUM being run recently.  I've got it working, but
the addition of the trigger slows things down so badly that it's not a
viable option.  I was hoping for advice on how to speed things up, or at
least an explanation of why it gets so slow.

The relevant tables are as follows:

---------------------------------------------------------------------------
CREATE TABLE dataset (
    dataset_id    SERIAL PRIMARY KEY,
    catalog_id    INTEGER REFERENCES catalog (catalog_id) ON DELETE CASCADE,
    t_begin       TIMESTAMP WITHOUT TIME ZONE NULL,
    t_end         TIMESTAMP WITHOUT TIME ZONE NULL,
    "ctime"       TIMESTAMP WITHOUT TIME ZONE NOT NULL DEFAULT now(),
    "mtime"       TIMESTAMP WITHOUT TIME ZONE NOT NULL DEFAULT now(),
    "assoc_count" BIGINT NOT NULL DEFAULT 0
);

CREATE TABLE assoc (
    dataset_id  INTEGER REFERENCES dataset (dataset_id) ON DELETE CASCADE,
    range       ip4r NOT NULL,
    label_id    INTEGER NULL,
    value       BIGINT NULL,
    PRIMARY KEY (dataset_id, range),
    UNIQUE (dataset_id, range, label_id)
);
---------------------------------------------------------------------------

What I want to do is update the assoc_count field in the dataset table
to reflect the count of related records in the assoc table.  To do so, I
added the following trigger:

---------------------------------------------------------------------------
CREATE OR REPLACE FUNCTION update_assoc_count_insert()
RETURNS TRIGGER AS
'
BEGIN
    UPDATE dataset
        SET assoc_count = assoc_count + 1
        WHERE dataset_id = NEW.dataset_id;
    RETURN NEW;
END
' LANGUAGE plpgsql;

CREATE TRIGGER assoc_update_assoc_count_insert AFTER INSERT ON assoc
    FOR EACH ROW EXECUTE PROCEDURE update_assoc_count_insert();
---------------------------------------------------------------------------

(I also have triggers for UPDATE/DELETE, left out for brevity.)
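
For reference, the omitted DELETE counterpart would presumably look something like
this (a sketch; the names and body are assumptions, not code from the original post):

---------------------------------------------------------------------------
CREATE OR REPLACE FUNCTION update_assoc_count_delete()
RETURNS TRIGGER AS
'
BEGIN
    UPDATE dataset
        SET assoc_count = assoc_count - 1
        WHERE dataset_id = OLD.dataset_id;
    RETURN OLD;
END
' LANGUAGE plpgsql;

CREATE TRIGGER assoc_update_assoc_count_delete AFTER DELETE ON assoc
    FOR EACH ROW EXECUTE PROCEDURE update_assoc_count_delete();
---------------------------------------------------------------------------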

The slowness I'm talking about doesn't show up for a single insert, but
arises when doing thousands of them in a transaction.  Here are some
test runs of 10,000 inserts without the trigger in place:

---------------------------------------------------------------------------
1000 (2231.540142/s)
2000 (2341.849077/s)
3000 (2234.332303/s)
4000 (2311.247629/s)
5000 (2366.171695/s)
6000 (2400.028800/s)
7000 (2407.147716/s)
8000 (2416.419084/s)
9000 (2401.476107/s)
10000 (2406.870943/s)
---------------------------------------------------------------------------

The number in parens is the number of inserts per second for each batch
of 1,000 inserts.  As you can see, performance isn't too terrible, and
is pretty constant from start to finish.

Now I add the trigger, and here's what happens:

---------------------------------------------------------------------------
1000 (1723.216901/s)
2000 (1613.529119/s)
3000 (1526.081496/s)
4000 (1431.907261/s)
5000 (1340.159570/s)
6000 (1269.746140/s)
7000 (1191.374990/s)
8000 (1117.332012/s)
9000 (1056.309389/s)
10000 (1001.051003/s)
---------------------------------------------------------------------------

The throughput of the first batch of 1,000 is diminished, but still
tolerable, but after 10,000 inserts, it's gotten much worse.  This
pattern continues, to the point where performance is unacceptable after
20k or 30k inserts.

To rule out the performance of the trigger mechanism itself, I swapped
the trigger out for one that does nothing.  The results were the same as
without the trigger (the  first set of numbers), which leads me to
believe there's something about the UPDATE statement in the trigger that
is causing this behavior.

I then tried setting the assoc_count to a constant number instead of
trying to increment it with assoc_count = assoc_count + 1, but
performance was just as bad as with the proper UPDATE statement.

I'm not really sure where to go next.  I can update the assoc_count once
at the end of the transaction (without a trigger), but this could lead
to inconsistencies if another client does inserts without updating the
count.  I would really prefer to use the trigger solution recommended on
the PGsql wiki, but can't do so until I solve this performance problem.
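
The one-shot recount described above might look like this (a sketch with an assumed
dataset_id of 42, not code from the original post):

---------------------------------------------------------------------------
-- Recompute the count once after the bulk insert.
UPDATE dataset d
   SET assoc_count = (SELECT count(*) FROM assoc a WHERE a.dataset_id = d.dataset_id)
 WHERE d.dataset_id = 42;
---------------------------------------------------------------------------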

I greatly appreciate any and all help.  Thanks.
-Tony




Re: Performance problem with row count trigger

From:
Craig Ringer
Date:
Tony Cebzanov wrote:

> The throughput of the first batch of 1,000 is diminished, but still
> tolerable, but after 10,000 inserts, it's gotten much worse.  This
> pattern continues, to the point where performance is unacceptable after
> 20k or 30k inserts.
> 
> To rule out the performance of the trigger mechanism itself, I swapped
> the trigger out for one that does nothing.  The results were the same as
> without the trigger (the  first set of numbers), which leads me to
> believe there's something about the UPDATE statement in the trigger that
> is causing this behavior.

MVCC bloat from the constant updates to the assoc_count column in the dataset table, maybe?

If you're using 8.3, I'd expect HOT to save you here. Are you using an
older version of PostgreSQL? If not, have you by any chance defined an
index on assoc_count ?
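
(Whether HOT is actually kicking in can be checked from the statistics views; the query
below is a sketch, not part of the original message. The n_tup_hot_upd column exists in
8.3 and later.)

SELECT relname, n_tup_upd, n_tup_hot_upd, n_dead_tup
  FROM pg_stat_user_tables
 WHERE relname = 'dataset';   -- HOT vs. ordinary updates, plus dead tuples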

Also, try to keep records in your `dataset' table as narrow as possible.
If the catalog_id, t_begin, t_end, ctime and mtime fields do not change
almost as often as the assoc_count field, split them into a separate
table with a foreign key referencing dataset_id, rather than storing
them directly in the dataset table.
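
A sketch of that split (the dataset_count table name is made up for illustration):

-- Narrow side table holding only the frequently updated counter.
CREATE TABLE dataset_count (
    dataset_id  INTEGER PRIMARY KEY REFERENCES dataset (dataset_id) ON DELETE CASCADE,
    assoc_count BIGINT NOT NULL DEFAULT 0
);

-- The insert trigger would then target the narrow table instead of dataset, e.g.
--   UPDATE dataset_count SET assoc_count = assoc_count + 1
--       WHERE dataset_id = NEW.dataset_id;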

--
Craig Ringer


Re: Performance problem with row count trigger

From:
Tony Cebzanov
Date:
Hi Craig, thanks for your help.

Craig Ringer wrote:
> MVCC bloat from the constant updates to the assoc_count column in the dataset table, maybe?

That's what a coworker suggested might be happening.  The fact that a
no-op trigger performs fine but the UPDATE trigger doesn't would seem to
confirm that it's something in the trigger SQL and not in the trigger
mechanism itself.

> If you're using 8.3, I'd expect HOT to save you here. Are you using an
> older version of PostgreSQL? If not, have you by any chance defined an
> index on assoc_count ?

I'm running 8.3.7, which is the most recent version from Macports.
There's no index of any kind on dataset.assoc_count.

Having read up on HOT, it sounds like it would be helpful.  Is there
anything I need to do to enable HOT in 8.3.7, or is it always used?

> Also, try to keep records in your `dataset' table as narrow as possible.
> If the catalog_id, t_begin, t_end, ctime and mtime fields do not change
> almost as often as the assoc_count field, split them into a separate
> table with a foreign key referencing dataset_id, rather than storing
> them directly in the dataset table.

ctime is the creation time of the dataset, so it's never supposed to be
updated.  mtime is the last time the dataset was changed, and there's
another trigger to update that timestamp whenever the dataset table
changes.  So, at best, I'd be able to remove the ctime column from the
dataset table, but I'd have to add the foreign key, so I don't think
that would be helpful.

Your mention of the ctime and mtime columns made me think the update
timestamp trigger may be contributing to the problem as well (since the
assoc INSERT trigger would presumably cause the dataset UPDATE trigger
to fire), but dropping that trigger yielded no improvement, so I think
it's something else.


Re: Performance problem with row count trigger

From:
Tom Lane
Date:
Tony Cebzanov <tonyceb@andrew.cmu.edu> writes:
> What I want to do is update the assoc_count field in the dataset table
> to reflect the count of related records in the assoc field.  To do so, I
> added the following trigger:

> CREATE OR REPLACE FUNCTION update_assoc_count_insert()
> RETURNS TRIGGER AS
> '
> BEGIN
>     UPDATE dataset
>         SET assoc_count = assoc_count + 1
>         WHERE dataset_id = NEW.dataset_id;
>     RETURN NEW;
> END
> ' LANGUAGE plpgsql;

> CREATE TRIGGER assoc_update_assoc_count_insert AFTER INSERT ON assoc
>     FOR EACH ROW EXECUTE PROCEDURE update_assoc_count_insert();

There is basically no way that this is going to not suck :-(.  In the
first place, using an AFTER trigger means that each update queues an
AFTER trigger update event that has to be fired at statement or
transaction end.  In the second place (as Craig correctly noted) this
results in a separate update to the count-table row for each inserted
row, which tremendously bloats the count table with dead tuples.
In the third place, if you have any concurrency of insertions, it
disappears because all the inserters need to update the same count row.

If you dig in the pgsql-hackers archives, you will find that the
original scheme for this was to have each transaction accumulate its
total number of insertions minus deletions for a table in local memory,
and then insert *one* delta row into the count table just before
transaction commit.  I don't think it's possible to do that with just
user-level triggers (not least because we haven't got ON COMMIT
triggers); it would have to be a C-code addition.  The various blog
entries you cite are non-peer-reviewed oversimplifications of that
design.

Digging around, the oldest description I can find of this idea is
http://archives.postgresql.org/pgsql-hackers/2003-09/msg00387.php
although there are more recent threads rehashing the topic.
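
(A user-level approximation of that delta idea is sometimes built from an append-only
delta table that is rolled up periodically. The sketch below uses made-up names and is
not the C-level design described above:)

-- Each trigger firing appends a +1/-1 row instead of updating a shared counter row.
CREATE TABLE assoc_count_delta (
    dataset_id INTEGER NOT NULL,
    delta      INTEGER NOT NULL
);
--   in the insert trigger:  INSERT INTO assoc_count_delta VALUES (NEW.dataset_id, +1);
--   in the delete trigger:  INSERT INTO assoc_count_delta VALUES (OLD.dataset_id, -1);

-- Periodic roll-up; the lock keeps new deltas from arriving between the UPDATE and DELETE.
BEGIN;
LOCK TABLE assoc_count_delta IN EXCLUSIVE MODE;
UPDATE dataset d
   SET assoc_count = d.assoc_count + s.sum_delta
  FROM (SELECT dataset_id, sum(delta) AS sum_delta
          FROM assoc_count_delta
         GROUP BY dataset_id) s
 WHERE d.dataset_id = s.dataset_id;
DELETE FROM assoc_count_delta;
COMMIT;

This trades one hot counter row for append-only inserts, so the bulk load sees no
row-level contention, at the cost of bloat in the delta table until it is rolled up.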

One point I don't recall anyone mentioning is that the stats subsystem
now implements a fairly large subset of this work already, namely the
initial data accumulation.  So you could imagine plugging something into
that to send the deltas to a table in addition to the stats collector.
        regards, tom lane
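
(For illustration, the per-table counters the stats subsystem already accumulates can be
read from SQL; this is a sketch, and the numbers are approximate, not transactional:)

SELECT n_tup_ins, n_tup_del, n_live_tup
  FROM pg_stat_user_tables
 WHERE relname = 'assoc';   -- insert/delete totals and approximate live-row count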


Re: Performance problem with row count trigger

From:
Wei Weng
Date:
On 04/02/2009 03:32 PM, Tom Lane wrote:
> There is basically no way that this is going to not suck :-(.  In the
> first place, using an AFTER trigger means that each update queues an
> AFTER trigger update event that has to be fired at statement or
> transaction end.  In the second place (as Craig correctly noted) this
> results in a separate update to the count-table row for each inserted
> row, which tremendously bloats the count table with dead tuples.
> In the third place, if you have any concurrency of insertions, it
> disappears because all the inserters need to update the same count row.
> [...]
> One point I don't recall anyone mentioning is that the stats subsystem
> now implements a fairly large subset of this work already, namely the
> initial data accumulation.  So you could imagine plugging something into
> that to send the deltas to a table in addition to the stats collector.

So, basically, other than reading the approximate tuple count from the pg_class table,
there isn't a good way to optimize COUNT(*)?

Thanks
Wei
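
(The pg_class estimate mentioned above looks roughly like this; it is a sketch, and
reltuples is only refreshed by VACUUM and ANALYZE, so the result is approximate:)

SELECT reltuples::bigint AS approx_rows
  FROM pg_class
 WHERE relname = 'assoc';   -- estimate only, no table scan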

Re: Performance problem with row count trigger

From:
"Greg Sabino Mullane"
Date:


> I was looking to speed up a count(*) query

A few things spring to mind:

1) Use a separate table, rather than storing things inside of
dataset itself. This will reduce the activity on the dataset table.

2) Do you really need bigint for the counts?

3) If you do want to do this, you'll need a different approach as
Tom mentioned. One way to do this is to have a special method for
bulk loading that gets around the normal updates and requires that
the user take responsibility for knowing when and how to call the
alternate path. The basic scheme is this:

1. Disable the normal triggers
2. Enable special (perl) triggers that keep the count in memory
3. Do the bulk changes
4. Enable normal triggers, disable special perl one
5. Do other things as needed....
6. Commit the changes to the assoc_count field.

Number 6 can be done anytime, as long as you are in the same session. The danger
is in leaving the session without calling the final function. This can be
solved with some deferred FK trickery, or by careful scripting of the events.
All this doesn't completely remove the pain, but it may shift it around enough
in useful ways for your app.

Here is some code to play with:

-- Stores changes into memory, no disk access:
CREATE OR REPLACE FUNCTION update_assoc_count_perl()
RETURNS TRIGGER
LANGUAGE plperlu
AS $_$
  use strict;
  my $event = $_TD->{event};
  my ($oldid, $newid) = ($_TD->{old}{dataset_id}, $_TD->{new}{dataset_id});
  if ($event eq 'INSERT') {
      $_SHARED{foobar}{$newid}++;
  } elsif ($event eq 'DELETE') {
      $_SHARED{foobar}{$oldid}--;
      $_SHARED{foobar}{$oldid} ||= -1;
  } elsif ($oldid ne $newid) {
      $_SHARED{foobar}{$oldid}--;
      $_SHARED{foobar}{$oldid} ||= -1;
      $_SHARED{foobar}{$newid}++;
  }
  return;
$_$;

-- Quick little debug function to view counts:
CREATE OR REPLACE FUNCTION get_assoc_count(int)
RETURNS INTEGER
LANGUAGE plperlu
AS $_$
  my $id = shift;
  return $_SHARED{foobar}{$id} || 0;
$_$;

-- Create, then disable, the perl trigger
CREATE TRIGGER update_assoc_count_perl AFTER INSERT OR UPDATE OR DELETE ON assoc
    FOR EACH ROW EXECUTE PROCEDURE update_assoc_count_perl();

ALTER TABLE assoc DISABLE TRIGGER update_assoc_count_perl;

-- Switches the main triggers off, and the memory triggers on
-- Use deferred constraints to ensure that end_bulkload_assoc_count is called
CREATE OR REPLACE FUNCTION start_bulkload_assoc_count()
RETURNS TEXT
LANGUAGE plperlu
AS $_$
  spi_exec_query("ALTER TABLE assoc DISABLE TRIGGER update_assoc_count_insert"); ## x 3 as needed
  spi_exec_query("ALTER TABLE assoc ENABLE TRIGGER update_assoc_count_perl");
  ## Put foreign key magic here
  return 'Ready to bulkload';
$_$;

-- Switches the triggers back, and allows a commit to proceed
CREATE OR REPLACE FUNCTION end_bulkload_assoc_count()
RETURNS TEXT
LANGUAGE plperlu
AS $_$
  my $sth = spi_prepare(
      'UPDATE dataset SET assoc_count = assoc_count + $1 WHERE dataset_id = $2',
      'INTEGER', 'INTEGER');
  for my $id (keys %{ $_SHARED{foobar} }) {
      my $val = $_SHARED{foobar}{$id};
      spi_exec_prepared($sth, $val, $id);
  }
  spi_exec_query("ALTER TABLE assoc ENABLE TRIGGER update_assoc_count_insert"); ## x 3 etc.
  spi_exec_query("ALTER TABLE assoc DISABLE TRIGGER update_assoc_count_perl");
  ## Put FK magic here
  return 'Bulk load complete';
$_$;

-- Usage:
SELECT start_bulkload_assoc_count();
-- Lots of inserts and updates
SELECT end_bulkload_assoc_count();


--
Greg Sabino Mullane greg@turnstep.com
End Point Corporation
PGP Key: 0x14964AC8 200904021644
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8




Re: Performance problem with row count trigger

From:
Tony Cebzanov
Date:
Greg Sabino Mullane wrote:
> A few things spring to mind:
> 
> 1) Use a separate table, rather than storing things inside of
> dataset itself. This will reduce the activity on the dataset table.

A separate table just for that one column?  Would that really help,
given that I'd have to add the foreign key dataset_id to the related
table?  How does splitting activity across dataset and, say,
dataset_counts help things?

> 2) Do you really need bigint for the counts?

Probably not.  Still, changing to INTEGER hasn't changed the performance
in any measurable way.

> 3) If you do want to do this, you'll need a different approach as
> Tom mentioned. One way to do this is to have a special method for
> bulk loading, that gets around the normal updates and requires that
> the user take responsiblity for knowing when and how to call the
> alternate path. The basic scheme is this:
> 
> 1. Disable the normal triggers
> 2. Enable special (perl) triggers that keep the count in memory
> 3. Do the bulk changes
> 4. Enable normal triggers, disable special perl one
> 5. Do other things as needed....
> 6. Commit the changes to the assoc_count field.

I gave this a shot, and my initial testing looks very promising.  Using
your scheme, the performance looks to be just as good as without the
trigger.

I haven't done any kind of edge case testing to see if weird things
happen when multiple datasets are added simultaneously, or when someone
inserts an assoc record out-of-band while a bulk dataset load is
happening, but you've certainly got me well on my way to a workable
solution.  Many thanks!

There's one part I don't get, though...

> Number 6 can be done anytime, as long as you are in the same session. The danger
> is in leaving the session without calling the final function. This can be
> solved with some deferred FK trickery, or by careful scripting of the events.
> All this doesn't completely remove the pain, but it may shift it around enough
> in useful ways for your app.

I'm not sure I understand the danger you're talking about here.  Doesn't
putting the whole start_bulkload_assoc_count(), bulk insert, and
end_bulkload_assoc_count() process in a transaction save me from any of
these problems?  Or is there some more subtle problem I'm not seeing?


Re: Performance problem with row count trigger

From:
"Greg Sabino Mullane"
Date:


> A separate table just for that one column?  Would that really help,
> given that I'd have to add the foreign key dataset_id to the related
> table?  How does splitting activity across dataset and, say,
> dataset_counts help things?

Well, it reduces the size (and bloat) of the main table and lets you offload
the indexes as well (especially important for pre-HOT systems). It's also
a win if there are many queries against the main table that *don't* hit the
summary count column. The cost of another table join for queries that *do* hit
it is probably offset by keeping the main table small and only updated when it
really needs to be. Still, it depends a lot on your particular circumstances;
the thread was started because of the pain of updating this one column, but
only you can make the final call about whether a separate table is a Good Idea
or a Silly Microoptimization.

>> 2) Do you really need bigint for the counts?

> Probably not.  Still, changing to INTEGER hasn't changed the performance
> in any measurable way.

I was simply thinking about the amount of space used here, not speed. Of course,
if there is any conceivable way that the amounts in question would *ever* exceed
two billion, you should keep it BIGINT, as changing it later would be painful.

> I'm not sure I understand the danger you're talking about here.  Doesn't
> putting the whole start_bulkload_assoc_count(), bulk insert, and
> end_bulkload_assoc_count() process in a transaction save me from any of
> these problems?  Or is there some more subtle problem I'm not seeing?

No, that would work perfectly fine. The danger I was referring to was someone
calling the first function and then committing before they remembered
to call the second one. It pays to be paranoid around databases :), but if
you have control of the environment, and that scenario seems unlikely,
it should be fine the way it is.

Oh, and I just remembered that the end_.. function should be clearing the
temporary hash we build up - I think the version I emailed neglected to do that.
Wouldn't want those numbers to stick around in the session.
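
(A sketch of that fix, as an assumption about the intended change rather than code from
the thread: add one line just before the final return in end_bulkload_assoc_count().)

  delete $_SHARED{foobar};   ## discard the in-memory deltas once they have been applied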

--
Greg Sabino Mullane greg@turnstep.com
End Point Corporation
PGP Key: 0x14964AC8 200904061028
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
