Chris Browne wrote:
> zb@cybertec.at (Zoltan Boszormenyi) writes:
>> SELECT COUNT(*)
> [Waving hands for a moment]
Would what Chris describes below be a good candidate for
a pgfoundry project with functions that create the
triggers for you? (Yeah, I might be volunteering, but I would
undoubtedly need help.)
Then when people ask this again and again, the response could
be "install http://pgfoundry.org/fast_count_star" rather
than "go read the mailing list archives and roll
your own - but remember to worry about deadlock and
contention on the table containing the counts".
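For the single-table case, the triggers such a project would generate could look roughly like this. This is only a sketch of the approach Chris describes below; the table, function, and trigger names are mine, not from any existing project, and it assumes PostgreSQL 8.2+ for TG_TABLE_NAME:

```sql
-- Hypothetical names throughout.
CREATE TABLE record_count (
    tablename text   NOT NULL,
    rec_cnt   bigint NOT NULL
);

-- Each INSERT on the tracked table adds a +1 row; each DELETE adds a -1 row.
-- Plain INSERTs into record_count, so concurrent writers never block each other.
CREATE FUNCTION count_insert() RETURNS trigger AS $$
BEGIN
    INSERT INTO record_count(tablename, rec_cnt) VALUES (TG_TABLE_NAME, 1);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE FUNCTION count_delete() RETURNS trigger AS $$
BEGIN
    INSERT INTO record_count(tablename, rec_cnt) VALUES (TG_TABLE_NAME, -1);
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER foo_count_ins AFTER INSERT ON foo
    FOR EACH ROW EXECUTE PROCEDURE count_insert();
CREATE TRIGGER foo_count_del AFTER DELETE ON foo
    FOR EACH ROW EXECUTE PROCEDURE count_delete();
```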
> What I would do *instead* would be for each INSERT to add a tuple with
> a count of 1, and for each DELETE to add a tuple with count of -1, and
> then to periodically have a single process walk through to summarize
> the table. There may be a further optimization to be had by doing a
> per-statement trigger that counts the number of INSERTs/DELETEs done,
> so that inserting 30 tuples (in the table being tracked) leads to
> adding a single tuple with count of 30 in the summary table.
>
> That changes the tradeoffs, again...
>
> - Since each INSERT/DELETE is simply doing an INSERT into the summary
> table, the ongoing activity is *never* blocking anything
>
> - You get the count by requesting
> SELECT SUM(rec_cnt) as record_count from record_count where tablename = 'foo';
>
> - Note that the query is MVCC-consistent with the table!
>
> - Once in a while, you'll want to run a single procedure that, for
> each table, deletes all the existing records, and replaces them
> with a single one consisting of the sum of the individual values.
>
> - You can re-sync a table by running:
> begin;
> delete from record_count where tablename = 'foo';
> insert into record_count(tablename, rec_cnt) select 'foo', (select count(*) from foo);
> commit;
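The periodic summarization step Chris mentions in the fourth point could be sketched like so. Again only a sketch, reusing the hypothetical record_count table names; it runs serializable so that the DELETE and the SUM see exactly the same set of rows, leaving counter rows committed by concurrent transactions untouched:

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- Snapshot the per-table sums...
CREATE TEMP TABLE count_summary AS
    SELECT tablename, SUM(rec_cnt) AS rec_cnt
      FROM record_count
     GROUP BY tablename;
-- ...then replace the many +1/-1 rows with one row per table.
-- Under serializable isolation the DELETE only removes rows visible
-- at our snapshot, i.e. exactly the rows we just summed.
DELETE FROM record_count;
INSERT INTO record_count SELECT tablename, rec_cnt FROM count_summary;
DROP TABLE count_summary;
COMMIT;
```

Since the triggers only ever INSERT into record_count, this never hits a write-write conflict with ongoing activity; it just shrinks the table so the SUM stays cheap.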