Thread: Speedier count(*)


Speedier count(*)

From: Dan Harris
Date: Wed, 10 Aug 2005 17:37:49 -0600
I have a web page for my customers that shows them the count of records
and some min/max date ranges in each table of a database, as this is
how we bill them for service.  They can log in and check the counts
at any time.  I'd like the counts to be as fresh as possible by
keeping this dynamic, but I will use a periodic 'snapshot'/cron job
if that is the only option to speed this up.  I have thought about
using the table statistics, but the estimation error is probably
unacceptable for billing purposes.
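
(By table statistics I mean the planner's estimate, i.e. something like

   SELECT relname, reltuples FROM pg_class WHERE relname = 'mytable';

with 'mytable' just a placeholder -- reltuples is only refreshed by
VACUUM/ANALYZE, so it can drift a long way between runs, which is what
makes it unusable for billing.)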

For some reason, the SQL Server we migrated the app from can return
count(*) in a split second on multi-million-row tables, even though
it is a MUCH slower box hardware-wise, while the same count now takes
many seconds to run. I have read in the archives about the problems
MVCC brings into the count(*) dilemma, forcing Pg to run a seq scan
to get counts.  Does SQL Server not use MVCC, or have they found
another approach for arriving at this number?  Adding up all the
min/max and count queries from the other tables, the whole page
takes about a minute to run. The tables contain anywhere from
1 million to 40 million rows.

Also, I am using "select ... group by ... order by ... limit 1" to get
the min/max, since I have already been bitten by the issue of min()/max()
being slower.
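
For example (the table and column names here are just illustrative),
instead of min()/max() I do something like:

   SELECT event_date FROM mytable
    ORDER BY event_date LIMIT 1;                     -- min

   SELECT event_date FROM mytable
    WHERE event_date IS NOT NULL
    ORDER BY event_date DESC LIMIT 1;                -- max

which lets the planner walk an index on event_date instead of scanning
the whole table (the IS NOT NULL matters on the DESC side, since NULLs
sort last ascending and first descending).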


-Dan



Re: Speedier count(*)

From: "Joshua D. Drake"
Date:
> Also, I am using "select ... group by ... order by ... limit 1" to get
> the min/max, since I have already been bitten by the issue of min()/max()
> being slower.

This specific instance is fixed in 8.1.

Sincerely,

Joshua D. Drake




Re: Speedier count(*)

From: Michael Fuhr
Date:
On Wed, Aug 10, 2005 at 05:37:49PM -0600, Dan Harris wrote:
> Also, I am using "select ... group by ... order by ... limit 1" to get
> the min/max, since I have already been bitten by the issue of min()/max()
> being slower.

PostgreSQL 8.1 will have optimizations for certain MIN and MAX
queries.

http://archives.postgresql.org/pgsql-committers/2005-04/msg00163.php
http://archives.postgresql.org/pgsql-committers/2005-04/msg00168.php

--
Michael Fuhr

Re: Speedier count(*)

From: John A Meinel
Date:
Dan Harris wrote:
> I have a web page for my customers that shows them the count of records
> and some min/max date ranges in each table of a database, as this is
> how we bill them for service.  They can log in and check the counts
> at any time.  I'd like the counts to be as fresh as possible by
> keeping this dynamic, but I will use a periodic 'snapshot'/cron job
> if that is the only option to speed this up.  I have thought about
> using the table statistics, but the estimation error is probably
> unacceptable for billing purposes.
>
> For some reason, the SQL Server we migrated the app from can return
> count(*) in a split second on multi-million-row tables, even though
> it is a MUCH slower box hardware-wise, while the same count now takes
> many seconds to run. I have read in the archives about the problems
> MVCC brings into the count(*) dilemma, forcing Pg to run a seq scan
> to get counts.  Does SQL Server not use MVCC, or have they found
> another approach for arriving at this number?  Adding up all the
> min/max and count queries from the other tables, the whole page
> takes about a minute to run. The tables contain anywhere from
> 1 million to 40 million rows.

I believe SQL Server doesn't use MVCC in the same way. At the very
least, it stores some row information in the index, so it can answer
some queries from the index alone, without having to visit the actual
page (MVCC requires a visit to the main table page to determine
visibility).

Depending on how much it impacts performance, you can create an
INSERT/UPDATE trigger so that whenever a new entry is added, it
automatically updates a statistics table. The statistics would then
be maintained as you go, rather than periodically as with a cron job.

I would go with cron if things can be slightly out of date (say, an
hour or more) and you need updates & inserts not to be slowed down.
Otherwise I think the trigger is nicer, since it doesn't do redundant
work and keeps everything up to date.
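
Just to make the idea concrete, here's a rough, untested sketch (table
and column names are made up; I'm assuming a date column event_date that
you want the min/max of). Deletes would need a matching trigger, and are
trickier, since removing the current min or max row means rescanning:

    -- one summary row per tracked table
    CREATE TABLE table_stats (
        tablename varchar(50) PRIMARY KEY,
        rowcount  bigint NOT NULL DEFAULT 0,
        min_date  date,
        max_date  date
    );

    INSERT INTO table_stats (tablename) VALUES ('mytable');

    CREATE OR REPLACE FUNCTION upd_stats() RETURNS trigger AS $$
    BEGIN
        -- bump the count and widen the date range if needed
        UPDATE table_stats SET
            rowcount = rowcount + 1,
            min_date = CASE WHEN min_date IS NULL OR NEW.event_date < min_date
                            THEN NEW.event_date ELSE min_date END,
            max_date = CASE WHEN max_date IS NULL OR NEW.event_date > max_date
                            THEN NEW.event_date ELSE max_date END
        WHERE tablename = TG_RELNAME;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER upd_stats_tr AFTER INSERT ON mytable
        FOR EACH ROW EXECUTE PROCEDURE upd_stats();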



John
=:->


Re: Speedier count(*)

From: Gavin Sherry
Date:
Hi Dan,

On Wed, 10 Aug 2005, Dan Harris wrote:

> I have a web page for my customers that shows them the count of records
> and some min/max date ranges in each table of a database, as this is
> how we bill them for service.  They can log in and check the counts
> at any time.  I'd like the counts to be as fresh as possible by
> keeping this dynamic, but I will use a periodic 'snapshot'/cron job
> if that is the only option to speed this up.  I have thought about
> using the table statistics, but the estimation error is probably
> unacceptable for billing purposes.
>
> For some reason, the SQL Server we migrated the app from can return
> count(*) in a split second on multi-million-row tables, even though
> it is a MUCH slower box hardware-wise, while the same count now takes
> many seconds to run. I have read in the archives about the problems
> MVCC brings into the count(*) dilemma, forcing Pg to run a seq scan
> to get counts.  Does SQL Server not use MVCC, or have they found another

SQL Server probably jumps through a lot of hoops to do fast count(*)s. I'm
sure we could do something similar -- it's just a question of complexity,
resources, desirability, etc. There are other solutions, which makes the
idea of doing it even less attractive.

> approach for arriving at this number?  Adding up all the
> min/max and count queries from the other tables, the whole page
> takes about a minute to run. The tables contain anywhere from
> 1 million to 40 million rows.
>
> Also, I am using "select ... group by ... order by ... limit 1" to get
> the min/max, since I have already been bitten by the issue of min()/max()
> being slower.

I generally pre-generate the results. There are two ways to do this:
the 'snapshot'/cron job you mentioned, or using rules and triggers to
maintain 'count' tables. The idea is that whenever data is added,
modified, or removed from your table, you modify counters in these
other tables.

Alternatively, feel free to post your schema and sample queries with
EXPLAIN ANALYZE results to this list. Or jump on IRC at
irc.freenode.net #postgresql and someone will be more than happy to
look through the problem in more detail.

Thanks,

Gavin

Re: Speedier count(*)

From: Mark Cotner
Date: Thu, 11 Aug 2005 00:40 -0400
Here's a trigger I wrote for essentially the same purpose.  The nice
thing about this is it keeps the number up to date for you, but you do
incur a slight overhead.

CREATE TABLE test (id serial primary key, name varchar(20));

CREATE TABLE rowcount (tablename varchar(50), rowcount bigint default 0);
CREATE INDEX rowcount_tablename ON rowcount(tablename);

CREATE OR REPLACE FUNCTION del_rowcount() RETURNS trigger AS $$
BEGIN
 UPDATE rowcount SET rowcount = rowcount-1 WHERE tablename = TG_RELNAME;
 RETURN OLD;
END;
$$ LANGUAGE PLPGSQL;

CREATE OR REPLACE FUNCTION add_rowcount() RETURNS trigger AS $$
BEGIN
 UPDATE rowcount SET rowcount = rowcount+1 WHERE tablename = TG_RELNAME;
 RETURN NEW;
END;
$$ LANGUAGE PLPGSQL;

CREATE TRIGGER del_rowcount_tr BEFORE DELETE ON test FOR EACH ROW EXECUTE
   PROCEDURE del_rowcount();
CREATE TRIGGER add_rowcount_tr BEFORE INSERT ON test FOR EACH ROW EXECUTE
   PROCEDURE add_rowcount();

INSERT INTO rowcount (tablename) VALUES ('test');

root=# select * from test;
 id | name
----+------
(0 rows)

Time: 0.934 ms
root=# select * from rowcount;
 tablename | rowcount
-----------+----------
 test      |        0
(1 row)

Time: 0.630 ms
root=# insert into test (name) values ('blah');
INSERT 1190671626 1
Time: 3.278 ms
root=# select * from test;
 id | name
----+------
  5 | blah
(1 row)

Time: 0.612 ms
root=# select * from rowcount;
 tablename | rowcount
-----------+----------
 test      |        1
(1 row)

Time: 0.640 ms
root=# insert into test (name) values ('blah');
INSERT 1190671627 1
Time: 1.677 ms
root=# select * from test;
 id | name
----+------
  5 | blah
  6 | blah
(2 rows)

Time: 0.653 ms
root=# select * from rowcount;
 tablename | rowcount
-----------+----------
 test      |        2
(1 row)

Time: 0.660 ms
root=# delete from test where id = 6;
DELETE 1
Time: 2.412 ms
root=# select * from test;
 id | name
----+------
  5 | blah
(1 row)

Time: 0.631 ms
root=# select * from rowcount;
 tablename | rowcount
-----------+----------
 test      |        1
(1 row)

Time: 0.609 ms

One thing to be mindful of: TRUNCATE is NOT accounted for by this, and
unfortunately the rule system doesn't cover truncate operations, so you
can't work around it that way.

'njoy,
Mark




Re: Speedier count(*)

From: Tino Wildenhain
Date:
On Thursday 11.08.2005 at 00:40 -0400, Mark Cotner wrote:
> Here's a trigger I wrote for essentially the same purpose.  The nice
> thing about this is it keeps the number up to date for you, but you do
> incur a slight overhead.
...
>
> CREATE TRIGGER del_rowcount_tr BEFORE DELETE ON test FOR EACH ROW EXECUTE
>    PROCEDURE del_rowcount();
> CREATE TRIGGER add_rowcount_tr BEFORE INSERT ON test FOR EACH ROW EXECUTE
>    PROCEDURE add_rowcount();
>
> INSERT INTO rowcount (tablename) VALUES ('test');
...

Beware of problems with concurrency, and even of what happens
if transactions roll back. Maybe you can "fix" it a bit by
regularly correcting the count via a cron job or so.
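
For instance, with the rowcount table from Mark's example, a nightly
cron job could simply reset the counter to the true value:

   UPDATE rowcount
      SET rowcount = (SELECT count(*) FROM test)
    WHERE tablename = 'test';

That also covers cases the triggers never see, like the TRUNCATE
problem Mark mentioned.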


Re: Speedier count(*)

From: Gavin Sherry
Date: Thu, 11 Aug 2005 20:36 +1000
On Thu, 11 Aug 2005, Tino Wildenhain wrote:

> On Thursday 11.08.2005 at 00:40 -0400, Mark Cotner wrote:
> > Here's a trigger I wrote for essentially the same purpose.  The nice
> > thing about this is it keeps the number up to date for you, but you do
> > incur a slight overhead.
> ...
> >
> > CREATE TRIGGER del_rowcount_tr BEFORE DELETE ON test FOR EACH ROW EXECUTE
> >    PROCEDURE del_rowcount();
> > CREATE TRIGGER add_rowcount_tr BEFORE INSERT ON test FOR EACH ROW EXECUTE
> >    PROCEDURE add_rowcount();
> >
> > INSERT INTO rowcount (tablename) VALUES ('test');
> ...
>
> Beware of problems with concurrency, and even of what happens
> if transactions roll back. Maybe you can "fix" it a bit by
> regularly correcting the count via a cron job or so.

What problems? MVCC takes care of this.

Gavin

Re: Speedier count(*)

From: Tino Wildenhain
Date:
On Thursday 11.08.2005 at 20:36 +1000, Gavin Sherry wrote:
> On Thu, 11 Aug 2005, Tino Wildenhain wrote:
>
> > On Thursday 11.08.2005 at 00:40 -0400, Mark Cotner wrote:
> > > Here's a trigger I wrote for essentially the same purpose.  The nice
> > > thing about this is it keeps the number up to date for you, but you do
> > > incur a slight overhead.
> > ...
> > >
> > > CREATE TRIGGER del_rowcount_tr BEFORE DELETE ON test FOR EACH ROW EXECUTE
> > >    PROCEDURE del_rowcount();
> > > CREATE TRIGGER add_rowcount_tr BEFORE INSERT ON test FOR EACH ROW EXECUTE
> > >    PROCEDURE add_rowcount();
> > >
> > > INSERT INTO rowcount (tablename) VALUES ('test');
> > ...
> >
> > Beware of problems with concurrency, and even of what happens
> > if transactions roll back. Maybe you can "fix" it a bit by
> > regularly correcting the count via a cron job or so.
>
> What problems? MVCC takes care of this.

Actually, in this case MVCC works against you.
Just imagine several competing transactions inserting
and deleting at will.

You could lock the count table to prevent the problem
where two competing transactions do an insert, read the
start value, add 1 to it, and then write the result
- which is n+1 rather than n+2 - so you are off by one.
The same applies when one transaction inserts 100 rows
and the other 120: then you could even be off by 100.

But locking probably gets you worse performance than simply
doing count(*) every time if you insert a lot. Also, prepare
for the likelihood of deadlocks.


Re: Speedier count(*)

From: PFC
Date: Thu, 11 Aug 2005 14:08 +0200

> You could lock the count table to prevent the problem
> where two competing transactions do an insert, read the
> start value, add 1 to it, and then write the result
> - which is n+1 rather than n+2 - so you are off by one.
> The same applies when one transaction inserts 100 rows
> and the other 120: then you could even be off by 100.

    Niet.

    If your trigger does UPDATE counts_cache SET cached_count =
cached_count+N WHERE ...
    Then all locking is taken care of by Postgres.
    Of course if you use 2 queries then you have locking issues.

    However, the UPDATE on counts_cache has a problem: it locks this row FOR
UPDATE for the whole transaction, and all transactions which want to
update the same row must wait to see if the update commits or rolls back,
so if you have one count-cache row for the whole table you get MySQL-style
scalability...

    To preserve scalability you could, instead of UPDATE, INSERT the delta of
rows inserted/deleted into a separate table (which has no concurrency issues)
and compute the current count with the sum() of the deltas; then, with a
cron job, consolidate the deltas and update the counts_cache table so that
the deltas table stays very small.
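
    A minimal, untested sketch of the scheme (names are made up; counts_cache
is assumed to hold one consolidated row per tracked table, and I reuse Mark's
test table):

CREATE TABLE counts_cache (tablename varchar(50) PRIMARY KEY,
                           cached_count bigint NOT NULL DEFAULT 0);

-- every transaction only ever INSERTs here, so there is no row contention
CREATE TABLE count_deltas (tablename varchar(50), delta bigint);

CREATE OR REPLACE FUNCTION track_delta() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO count_deltas VALUES (TG_RELNAME, 1);
        RETURN NEW;
    ELSE
        INSERT INTO count_deltas VALUES (TG_RELNAME, -1);
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER track_delta_tr AFTER INSERT OR DELETE ON test
    FOR EACH ROW EXECUTE PROCEDURE track_delta();

-- current count = consolidated base + pending deltas
SELECT cached_count + (SELECT coalesce(sum(delta), 0)
                         FROM count_deltas
                        WHERE tablename = 'test') AS current_count
  FROM counts_cache
 WHERE tablename = 'test';

-- cron job: fold the deltas into the base value; serializable so the
-- UPDATE and the DELETE see exactly the same set of delta rows
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE counts_cache
   SET cached_count = cached_count + (SELECT coalesce(sum(delta), 0)
                                        FROM count_deltas
                                       WHERE tablename = 'test')
 WHERE tablename = 'test';
DELETE FROM count_deltas WHERE tablename = 'test';
COMMIT;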

Re: Speedier count(*)

From: Tino Wildenhain
Date:
On Thursday 11.08.2005 at 14:08 +0200, PFC wrote:
>
> > You could lock the count table to prevent the problem
> > where two competing transactions do an insert, read the
> > start value, add 1 to it, and then write the result
> > - which is n+1 rather than n+2 - so you are off by one.
> > The same applies when one transaction inserts 100 rows
> > and the other 120: then you could even be off by 100.
>
>     Niet.
>
>     If your trigger does UPDATE counts_cache SET cached_count =
> cached_count+N WHERE ...
>     Then all locking is taken care of by Postgres.
>     Of course if you use 2 queries then you have locking issues.

Yes, in the case where you use just the UPDATE statement you are
right. This does the locking I was talking about.

In either case I'd use an AFTER trigger rather than a BEFORE trigger
to minimize the impact.

>     However, the UPDATE on counts_cache has a problem: it locks this row FOR
> UPDATE for the whole transaction, and all transactions which want to
> update the same row must wait to see if the update commits or rolls back,
> so if you have one count-cache row for the whole table you get MySQL-style
> scalability...
>
>     To preserve scalability you could, instead of UPDATE, INSERT the delta of
> rows inserted/deleted into a separate table (which has no concurrency issues)
> and compute the current count with the sum() of the deltas; then, with a
> cron job, consolidate the deltas and update the counts_cache table so that
> the deltas table stays very small.

Yes, this is in fact a better approach to the problem.

(All this provided you want an unqualified count(*), as the original
poster does.)




Re: Speedier count(*)

From: Dan Harris
Date:
Thanks for all the great ideas.  I have more options to evaluate now.

-Dan