Discussion: Speeding up aggregates


Speeding up aggregates

From: "Josh Berkus"
Folks,

One of Postgres' poorest-performing areas is aggregates. This is the
unfortunate side effect of our fully extensible aggregate and type
system. However, I thought that the folks on this list might have a
few tips on making aggregates perform faster.

Here's mine:  Aggregate Caching Table

This is a brute-force approach.   However, if you have a table with a
million records for which users *frequently* ask for grand totals or
counts, it can work fine.

A simple example:

CREATE TABLE client_case_counts (
    client_id INT NOT NULL REFERENCES clients(client_id) ON DELETE CASCADE,
    no_cases INT NOT NULL DEFAULT 0
);

Then create triggers:

CREATE FUNCTION tf_maintain_client_counts ()
RETURNS opaque AS '
BEGIN
    -- bump the cached count if this client already has a row
    UPDATE client_case_counts SET no_cases = no_cases + 1
        WHERE client_id = NEW.client_id;
    -- otherwise start a new row at 1
    INSERT INTO client_case_counts ( client_id, no_cases )
        SELECT NEW.client_id, 1
        WHERE NOT EXISTS (SELECT client_id FROM client_case_counts ccc2
            WHERE ccc2.client_id = NEW.client_id);
    RETURN NEW;
END;' LANGUAGE 'plpgsql';

CREATE TRIGGER tg_maintain_client_counts AFTER INSERT ON cases
    FOR EACH ROW EXECUTE PROCEDURE tf_maintain_client_counts();
etc.
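
For example, the DELETE side that the "etc." stands in for might look
something like this (the function and trigger names below are illustrative,
not part of the original example):

CREATE FUNCTION tf_maintain_client_counts_del ()
RETURNS opaque AS '
BEGIN
    -- a case went away, so decrement the cached count
    UPDATE client_case_counts SET no_cases = no_cases - 1
        WHERE client_id = OLD.client_id;
    RETURN OLD;
END;' LANGUAGE 'plpgsql';

CREATE TRIGGER tg_maintain_client_counts_del AFTER DELETE ON cases
    FOR EACH ROW EXECUTE PROCEDURE tf_maintain_client_counts_del();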

While effective, this approach is costly in terms of update/insert
processing.  It is also limited to whatever aggregate requests you have
anticipated ... it does no good for aggregates over a user-defined
range.

What have other Postgres users done to speed up aggregates on large
tables?

-Josh Berkus






Re: Speeding up aggregates

From: Joe Conway
Josh Berkus wrote:
> While effective, this approach is costly in terms of update/insert
> processing.  It is also limited to whatever aggregate requests you have
> anticipated ... it does no good for aggregates over a user-defined
> range.

I think this is where Oracle's materialized views come into play.

>
> What have other Postgres users done to speed up aggregates on large
> tables?

I've found that in most real-life applications, expensive aggregate queries
tend to be needed for management reporting, which does not need to be based on
up-to-the-second fresh data. Normally, for these types of reports, a summary
through, say, last night at midnight is perfectly adequate.

The simplest solution in these cases is to build a table to hold your
partially or completely summarized data, then report off of that. Use cron to
refresh these summary tables at convenient times (daily, every 2 hours, or
whatever).
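
A minimal sketch of that pattern, with hypothetical table and column names
(and assuming a cases table like the one earlier in the thread), might be:

-- one-time setup: the summary table the reports read from
CREATE TABLE client_case_summary (
    client_id INT NOT NULL,
    no_cases  INT NOT NULL
);

-- refresh_summary.sql, run from cron (e.g. nightly at 00:15):
--   15 0 * * *  psql -f /path/to/refresh_summary.sql mydb
BEGIN;
DELETE FROM client_case_summary;
INSERT INTO client_case_summary (client_id, no_cases)
    SELECT client_id, count(*)
    FROM cases
    GROUP BY client_id;
COMMIT;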

Joe


Re: Speeding up aggregates

From: Tom Lane
"Josh Berkus" <josh@agliodbs.com> writes:
> What have other Postgres users done to speed up aggregates on large
> tables?

FWIW, I've implemented hashed aggregation in CVS tip.  I have not had
the time to try to benchmark it, but I'd be interested if anyone can
run some tests on 7.4devel.  Eliminating the need for a SORT step
should help aggregations over large datasets.

Note that even though there's no SORT, the sort_mem setting is used
to determine the allowable hashtable size, so a too-small sort_mem
might discourage the planner from selecting hashed aggregation.
Use EXPLAIN to see which query plan gets chosen.
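
For example, a quick check might look like this (illustrative only; the
table name is hypothetical, and the plan chosen depends entirely on your
data and settings):

SET sort_mem = 16384;    -- give the planner room to consider a hash table
EXPLAIN SELECT client_id, count(*)
    FROM cases
    GROUP BY client_id;
-- a HashAggregate node at the top means hashed aggregation was chosen;
-- a GroupAggregate sitting on top of a Sort means the sort-based plan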

            regards, tom lane

Re: Speeding up aggregates

From: Hannu Krosing
Tom Lane wrote on Sat, 07.12.2002 at 01:46:
> "Josh Berkus" <josh@agliodbs.com> writes:
> > What have other Postgres users done to speed up aggregates on large
> > tables?
>
> FWIW, I've implemented hashed aggregation in CVS tip.

Great!

This should also make it easier to implement all kinds of GROUP BY
ROLLUP|CUBE|GROUPING SETS|() queries.
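
For reference, the SQL99 syntax in question looks like this (table and
column names are hypothetical, and PostgreSQL does not accept this syntax
yet):

-- subtotals per (region, product), per region, and a grand total in one query
SELECT region, product, sum(amount)
    FROM sales
    GROUP BY ROLLUP (region, product);

-- explicit control over which grouping combinations get produced
SELECT region, product, sum(amount)
    FROM sales
    GROUP BY GROUPING SETS ((region, product), (region), ());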

Do you have any near-term plans for doing them ?

> I have not had
> the time to try to benchmark it, but I'd be interested if anyone can
> run some tests on 7.4devel. Eliminating the need for a SORT step
> should help aggregations over large datasets.

Is there a variable to set that would disable one or another, like we
currently have for disabling various join strategies ?

> Note that even though there's no SORT, the sort_mem setting is used
> to determine the allowable hashtable size, so a too-small sort_mem
> might discourage the planner from selecting hashed aggregation.

Do you mean that hashed aggregation can't overflow to disk, or would it
just be too slow ?

--
Hannu Krosing <hannu@tm.ee>

Re: Speeding up aggregates

From: Tom Lane
Hannu Krosing <hannu@tm.ee> writes:
> This should also make it easier to implement all kinds of GROUP BY
> ROLLUP|CUBE|GROUPING SETS|() queries.

> Do you have any near-term plans for doing them ?

Not me.

> Is there a variable to set that would disable one or another, like we
> currently have for disabling various join strategies ?

enable_hashagg.  I didn't think about one to prevent the old style.

>> Note that even though there's no SORT, the sort_mem setting is used
>> to determine the allowable hashtable size, so a too-small sort_mem
>> might discourage the planner from selecting hashed aggregation.

> Do you mean that hashed aggregation can't overflow to disk, or would it
> just be too slow ?

I didn't write any code to let it overflow to disk --- didn't seem
likely to be useful.  (You're probably better off with a sort-based
aggregation if there are too many distinct grouping keys.)

            regards, tom lane

Re: Speeding up aggregates

From: Hannu Krosing
Hannu Krosing wrote on Sat, 07.12.2002 at 02:32:
> Tom Lane wrote on Sat, 07.12.2002 at 01:46:
> > "Josh Berkus" <josh@agliodbs.com> writes:
> > > What have other Postgres users done to speed up aggregates on large
> > > tables?
> >
> > FWIW, I've implemented hashed aggregation in CVS tip.
>
> Great!
>
> This should also make it easier to implement all kinds of GROUP BY
> ROLLUP|CUBE|GROUPING SETS|() queries.

Of these, only ROLLUP can be done in one scan after a sort; all the others
would generally require several scans without hashing.
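
Roughly speaking (an illustrative expansion, not from the original mail),
ROLLUP (a, b) produces the same groups as a union of progressively coarser
GROUP BYs; with the input sorted on (a, b), each subtotal can be emitted as
the key prefix changes, so one sorted pass suffices, while CUBE and GROUPING
SETS include groupings that do not share that prefix order:

-- GROUP BY ROLLUP (a, b) yields the same groups as:
SELECT a, b, sum(x)       FROM t GROUP BY a, b
UNION ALL
SELECT a, NULL, sum(x)    FROM t GROUP BY a
UNION ALL
SELECT NULL, NULL, sum(x) FROM t;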


I just noticed that we don't even have a TODO for this. I think this
would be a good TODO item.

Bruce, could you add:

* Add ROLLUP, CUBE, GROUPING SETS options to GROUP BY


They are all defined in SQL99 p.79 <group by clause>


Some more background info (from a quick Google search)

a very short overview:
  http://www.neddo.com/dm3e/sql3&olap.html


more thorough guide for DB2:
http://www.student.math.uwaterloo.ca/~cs448/db2_doc/html/db2s0/frame3.htm#db2s0279


-----------------
Hannu


Re: Speeding up aggregates

From: Josh Berkus
Tom,

> FWIW, I've implemented hashed aggregation in CVS tip.  I have not had
> the time to try to benchmark it, but I'd be interested if anyone can
> run some tests on 7.4devel.  Eliminating the need for a SORT step
> should help aggregations over large datasets.

I'd love to, but I am still too much of a tyro to build Postgres from CVS.
As soon as there's a tarball of 7.4devel, I'll build it and run comparisons.

--
-Josh Berkus
 Aglio Database Solutions
 San Francisco


Re: Speeding up aggregates

From: Hannu Krosing
Tom Lane wrote on Sat, 07.12.2002 at 02:42:
> Hannu Krosing <hannu@tm.ee> writes:
> > This should also make it easier to implement all kinds of GROUP BY
> > ROLLUP|CUBE|GROUPING SETS|() queries.
>
> > Do you have any near-term plans for doing them ?
>
> Not me.

I'll try to look into it then.

No promises about when it will be ready ;)

> > Is there a variable to set that would disable one or another, like we
> > currently have for disabling various join strategies ?
>
> enable_hashagg.  I didn't think about one to prevent the old style.
>
> >> Note that even though there's no SORT, the sort_mem setting is used
> >> to determine the allowable hashtable size, so a too-small sort_mem
> >> might discourage the planner from selecting hashed aggregation.
>
> > Do you mean that hashed aggregation can't overflow to disk, or would it
> > just be too slow ?
>
> I didn't write any code to let it overflow to disk --- didn't seem
> likely to be useful.  (You're probably better off with a sort-based
> aggregation if there are too many distinct grouping keys.)

For simple GROUP BY this is most likely so, but for CUBE or GROUPING SETS
it may still be faster to overflow to disk than to do N passes over the data
in different orderings.

Of course, we could use a combined approach here: do it the old way (sort) for
the main body and run a parallel hashed aggregation for the other, out-of-order groups.

------------
Hannu



Re: Speeding up aggregates

From: Hannu Krosing
Tom Lane wrote on Sat, 07.12.2002 at 02:42:
> Hannu Krosing <hannu@tm.ee> writes:
> > Is there a variable to set that would disable one or another, like we
> > currently have for disabling various join strategies ?
>
> enable_hashagg.  I didn't think about one to prevent the old style.

Could be handy for testing.

--
Hannu Krosing <hannu@tm.ee>

Re: Speeding up aggregates

From: Tom Lane
Josh Berkus <josh@agliodbs.com> writes:
> I'd love to, but I am still too much of a tyro to build Postgres from CVS.
> As soon as there's a tarball of 7.4devel, I'll build it and run comparisons.

There should be a nightly snapshot tarball on the FTP server.

            regards, tom lane

Re: Speeding up aggregates

From: Josh Berkus
Tom,

We have a winner on simple aggregates:

Version 7.2.3:
explain analyze select client_id, count(*) from case_clients group by client_id;
NOTICE:  QUERY PLAN:

Aggregate  (cost=11892.51..12435.75 rows=10865 width=4) (actual time=1162.27..1569.40 rows=436 loops=1)
  ->  Group  (cost=11892.51..12164.13 rows=108648 width=4) (actual time=1162.24..1477.70 rows=108648 loops=1)
        ->  Sort  (cost=11892.51..11892.51 rows=108648 width=4) (actual time=1162.22..1280.64 rows=108648 loops=1)
              ->  Seq Scan on case_clients  (cost=0.00..2804.48 rows=108648 width=4) (actual time=0.07..283.14 rows=108648 loops=1)
Total runtime: 2387.87 msec

Versus Version 7.4devel:
explain analyze select client_id, count(*) from case_clients group by client_id;
                                                 QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=3289.72..3289.84 rows=46 width=4) (actual time=447.80..448.71 rows=436 loops=1)
   ->  Seq Scan on case_clients  (cost=0.00..2746.48 rows=108648 width=4) (actual time=0.08..267.45 rows=108648 loops=1)
 Total runtime: 473.77 msec
(3 rows)

However, more complex queries involving aggregates seem to be unable to make
use of HashAggregate. I'll get back to you when I know what the breakpoint is.

--
-Josh Berkus

______AGLIO DATABASE SOLUTIONS___________________________
                                        Josh Berkus
   Complete information technology     josh@agliodbs.com
    and data management solutions     (415) 565-7293
   for law firms, small businesses      fax 621-2533
    and non-profit organizations.     San Francisco


Re: Speeding up aggregates

From: Ron Johnson
On Fri, 2002-12-06 at 14:46, Tom Lane wrote:
> "Josh Berkus" <josh@agliodbs.com> writes:
> > What have other Postgres users done to speed up aggregates on large
> > tables?
>
> FWIW, I've implemented hashed aggregation in CVS tip.  I have not had
> the time to try to benchmark it, but I'd be interested if anyone can
> run some tests on 7.4devel.  Eliminating the need for a SORT step
> should help aggregations over large datasets.
>
> Note that even though there's no SORT, the sort_mem setting is used
> to determine the allowable hashtable size, so a too-small sort_mem
> might discourage the planner from selecting hashed aggregation.
> Use EXPLAIN to see which query plan gets chosen.

Hi.

What exactly is "hashed aggregation"?

From Josh Berkus' email with the EXPLAIN data, it still looks like
supporting indexes aren't used, so are you still scanning the table?

--
+------------------------------------------------------------+
| Ron Johnson, Jr.     mailto:ron.l.johnson@cox.net          |
| Jefferson, LA  USA   http://members.cox.net/ron.l.johnson  |
|                                                            |
| "they love our milk and honey, but preach about another    |
|  way of living"                                            |
|    Merle Haggard, "The Fighting Side Of Me"                |
+------------------------------------------------------------+


Re: Speeding up aggregates

From: Joe Conway
Tom Lane wrote:
> FWIW, I've implemented hashed aggregation in CVS tip.  I have not had
> the time to try to benchmark it, but I'd be interested if anyone can
> run some tests on 7.4devel.  Eliminating the need for a SORT step
> should help aggregations over large datasets.
>
> Note that even though there's no SORT, the sort_mem setting is used
> to determine the allowable hashtable size, so a too-small sort_mem
> might discourage the planner from selecting hashed aggregation.
> Use EXPLAIN to see which query plan gets chosen.
>

Here's some tests on a reasonable sized (and real life as opposed to
contrived) dataset:

parts=# set enable_hashagg to off;
SET
parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv i, iwhs w where i.part_id = w.part_id group by i.part_id;
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=11111.93..11744.90 rows=35528 width=36) (actual time=2799.40..3140.17 rows=34575 loops=1)
   ->  Sort  (cost=11111.93..11293.31 rows=72553 width=36) (actual time=2799.35..2896.43 rows=72548 loops=1)
         Sort Key: i.part_id
         ->  Hash Join  (cost=1319.10..5254.45 rows=72553 width=36) (actual time=157.72..1231.01 rows=72548 loops=1)
               Hash Cond: ("outer".part_id = "inner".part_id)
               ->  Seq Scan on iwhs w  (cost=0.00..2121.53 rows=72553 width=22) (actual time=0.01..286.80 rows=72553 loops=1)
               ->  Hash  (cost=1230.28..1230.28 rows=35528 width=14) (actual time=157.50..157.50 rows=0 loops=1)
                     ->  Seq Scan on inv i  (cost=0.00..1230.28 rows=35528 width=14) (actual time=0.02..88.00 rows=35528 loops=1)
 Total runtime: 3168.73 msec
(9 rows)

parts=# set enable_hashagg to on;
SET
parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv i, iwhs w where i.part_id = w.part_id group by i.part_id;
                                                         QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=5617.22..5706.04 rows=35528 width=36) (actual time=1507.89..1608.32 rows=34575 loops=1)
   ->  Hash Join  (cost=1319.10..5254.45 rows=72553 width=36) (actual time=153.46..1231.34 rows=72548 loops=1)
         Hash Cond: ("outer".part_id = "inner".part_id)
         ->  Seq Scan on iwhs w  (cost=0.00..2121.53 rows=72553 width=22) (actual time=0.01..274.74 rows=72553 loops=1)
         ->  Hash  (cost=1230.28..1230.28 rows=35528 width=14) (actual time=153.21..153.21 rows=0 loops=1)
               ->  Seq Scan on inv i  (cost=0.00..1230.28 rows=35528 width=14) (actual time=0.03..84.67 rows=35528 loops=1)
 Total runtime: 1661.53 msec
(7 rows)

parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv i, iwhs w where i.part_id = w.part_id group by i.part_id having sum(w.qty_oh) > 0;
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=11111.93..12015.10 rows=35528 width=36) (actual time=2823.65..3263.16 rows=4189 loops=1)
   Filter: (sum(qty_oh) > 0::double precision)
   ->  Sort  (cost=11111.93..11293.31 rows=72553 width=36) (actual time=2823.40..2926.07 rows=72548 loops=1)
         Sort Key: i.part_id
         ->  Hash Join  (cost=1319.10..5254.45 rows=72553 width=36) (actual time=156.39..1240.61 rows=72548 loops=1)
               Hash Cond: ("outer".part_id = "inner".part_id)
               ->  Seq Scan on iwhs w  (cost=0.00..2121.53 rows=72553 width=22) (actual time=0.01..290.47 rows=72553 loops=1)
               ->  Hash  (cost=1230.28..1230.28 rows=35528 width=14) (actual time=156.16..156.16 rows=0 loops=1)
                     ->  Seq Scan on inv i  (cost=0.00..1230.28 rows=35528 width=14) (actual time=0.02..86.95 rows=35528 loops=1)
 Total runtime: 3282.27 msec
(10 rows)


Note that similar to Josh, I saw a nice improvement when using the
HashAggregate on the simpler case, but as soon as I added a HAVING clause the
optimizer switched back to GroupAggregate.

I'll try to play around with this a bit more later today.

Joe


Re: Speeding up aggregates

From: Joe Conway
Tom Lane wrote:
> Note that even though there's no SORT, the sort_mem setting is used
> to determine the allowable hashtable size, so a too-small sort_mem
> might discourage the planner from selecting hashed aggregation.
> Use EXPLAIN to see which query plan gets chosen.
>

Just to follow up on my last post, I did indeed find that bumping up sort_mem
caused a switch back to HashAggregate, and a big improvement:

parts=# show sort_mem ;
  sort_mem
----------
  8192
(1 row)

parts=# set sort_mem to 32000;
SET
parts=# show sort_mem ;
  sort_mem
----------
  32000
(1 row)

parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv i, iwhs w where i.part_id = w.part_id group by i.part_id having sum(w.qty_oh) > 0;
                                                         QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=5254.46..5432.10 rows=35528 width=36) (actual time=1286.89..1399.36 rows=4189 loops=1)
   Filter: (sum(qty_oh) > 0::double precision)
   ->  Hash Join  (cost=1319.10..4710.31 rows=72553 width=36) (actual time=163.36..947.54 rows=72548 loops=1)
         Hash Cond: ("outer".part_id = "inner".part_id)
         ->  Seq Scan on iwhs w  (cost=0.00..2121.53 rows=72553 width=22) (actual time=0.01..266.20 rows=72553 loops=1)
         ->  Hash  (cost=1230.28..1230.28 rows=35528 width=14) (actual time=162.70..162.70 rows=0 loops=1)
               ->  Seq Scan on inv i  (cost=0.00..1230.28 rows=35528 width=14) (actual time=0.04..88.98 rows=35528 loops=1)
 Total runtime: 1443.93 msec
(8 rows)

parts=# set sort_mem to 8192;
SET
parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv i, iwhs w where i.part_id = w.part_id group by i.part_id having sum(w.qty_oh) > 0;
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=11111.93..12015.10 rows=35528 width=36) (actual time=2836.98..3261.66 rows=4189 loops=1)
   Filter: (sum(qty_oh) > 0::double precision)
   ->  Sort  (cost=11111.93..11293.31 rows=72553 width=36) (actual time=2836.73..2937.78 rows=72548 loops=1)
         Sort Key: i.part_id
         ->  Hash Join  (cost=1319.10..5254.45 rows=72553 width=36) (actual time=155.42..1258.40 rows=72548 loops=1)
               Hash Cond: ("outer".part_id = "inner".part_id)
               ->  Seq Scan on iwhs w  (cost=0.00..2121.53 rows=72553 width=22) (actual time=0.01..308.57 rows=72553 loops=1)
               ->  Hash  (cost=1230.28..1230.28 rows=35528 width=14) (actual time=155.19..155.19 rows=0 loops=1)
                     ->  Seq Scan on inv i  (cost=0.00..1230.28 rows=35528 width=14) (actual time=0.02..86.82 rows=35528 loops=1)
 Total runtime: 3281.75 msec
(10 rows)


So when it gets used, HashAggregate has provided a factor of two improvement
on this test case at least. Nice work, Tom!

Joe


Re: Speeding up aggregates

From: Hannu Krosing
On Sun, 2002-12-08 at 19:31, Joe Conway wrote:

> parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv
> i, iwhs w where i.part_id = w.part_id group by i.part_id having sum(w.qty_oh) > 0;
>                                                              QUERY PLAN
...
>   Total runtime: 3282.27 msec
> (10 rows)
>
>
> Note that similar to Josh, I saw a nice improvement when using the
> HashAggregate on the simpler case, but as soon as I added a HAVING clause the
> optimizer switched back to GroupAggregate.
>
> I'll try to play around with this a bit more later today.

Try turning the having into subquery + where:

explain analyze
select * from (
    select i.part_id, sum(w.qty_oh) as total_oh
      from inv i, iwhs w
     where i.part_id = w.part_id
     group by i.part_id) sub
where total_oh > 0;

--
Hannu Krosing <hannu@tm.ee>

Re: Speeding up aggregates

From: Bruce Momjian
Added.

---------------------------------------------------------------------------

Hannu Krosing wrote:
> Hannu Krosing wrote on Sat, 07.12.2002 at 02:32:
> > Tom Lane wrote on Sat, 07.12.2002 at 01:46:
> > > "Josh Berkus" <josh@agliodbs.com> writes:
> > > > What have other Postgres users done to speed up aggregates on large
> > > > tables?
> > >
> > > FWIW, I've implemented hashed aggregation in CVS tip.
> >
> > Great!
> >
> > This should also make it easier to implement all kinds of GROUP BY
> > ROLLUP|CUBE|GROUPING SETS|() queries.
>
> Of these only ROLLUP can be done in one scan after sort, all others
> would generally require several scans without hashing.
>
>
> I just noticed that we don't even have a TODO for this. I think this
> would be a good TODO item.
>
> Bruce, could you add:
>
> * Add ROLLUP, CUBE, GROUPING SETS options to GROUP BY
>
>
> They are all defined in SQL99 p.79 <group by clause>
>
>
> Some more background info (from a quick Google search)
>
> a very short overview:
>   http://www.neddo.com/dm3e/sql3&olap.html
>
>
> more thorough guide for DB2:
> http://www.student.math.uwaterloo.ca/~cs448/db2_doc/html/db2s0/frame3.htm#db2s0279
>
>
> -----------------
> Hannu
>
>

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: Speeding up aggregates

From: Joe Conway
Hannu Krosing wrote:
> On Sun, 2002-12-08 at 19:31, Joe Conway wrote:
>
>
>>parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv
>>i, iwhs w where i.part_id = w.part_id group by i.part_id having sum(w.qty_oh) > 0;
>>                                                             QUERY PLAN
>
> ...
>
>>  Total runtime: 3282.27 msec
>>(10 rows)
>>
>>
>>Note that similar to Josh, I saw a nice improvement when using the
>>HashAggregate on the simpler case, but as soon as I added a HAVING clause the
>>optimizer switched back to GroupAggregate.
>>
>>I'll try to play around with this a bit more later today.
>
>
> Try turning the having into subquery + where:
>
> explain analyze
> select * from (
>     select i.part_id, sum(w.qty_oh) as total_oh
>       from inv i, iwhs w
>      where i.part_id = w.part_id
>      group by i.part_id) sub
> where total_oh > 0;
>

Pretty much the same result. See below.

Joe

======================================
parts=# set sort_mem to 8000;
SET
parts=# explain analyze select * from (select i.part_id, sum(w.qty_oh) as total_oh from inv i, iwhs w where i.part_id = w.part_id group by i.part_id) sub where total_oh > 0;
                                                               QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------
 Subquery Scan sub  (cost=11111.93..12015.10 rows=35528 width=36) (actual time=2779.16..3212.46 rows=4189 loops=1)
   ->  GroupAggregate  (cost=11111.93..12015.10 rows=35528 width=36) (actual time=2779.15..3202.97 rows=4189 loops=1)
         Filter: (sum(qty_oh) > 0::double precision)
         ->  Sort  (cost=11111.93..11293.31 rows=72553 width=36) (actual time=2778.90..2878.33 rows=72548 loops=1)
               Sort Key: i.part_id
               ->  Hash Join  (cost=1319.10..5254.45 rows=72553 width=36) (actual time=155.80..1235.32 rows=72548 loops=1)
                     Hash Cond: ("outer".part_id = "inner".part_id)
                     ->  Seq Scan on iwhs w  (cost=0.00..2121.53 rows=72553 width=22) (actual time=0.01..282.38 rows=72553 loops=1)
                     ->  Hash  (cost=1230.28..1230.28 rows=35528 width=14) (actual time=155.56..155.56 rows=0 loops=1)
                           ->  Seq Scan on inv i  (cost=0.00..1230.28 rows=35528 width=14) (actual time=0.02..86.69 rows=35528 loops=1)
 Total runtime: 3232.84 msec
(11 rows)

parts=# set sort_mem to 12000;
SET
parts=# explain analyze select * from (select i.part_id, sum(w.qty_oh) as total_oh from inv i, iwhs w where i.part_id = w.part_id group by i.part_id) sub where total_oh > 0;
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 Subquery Scan sub  (cost=5617.22..5794.86 rows=35528 width=36) (actual time=1439.24..1565.47 rows=4189 loops=1)
   ->  HashAggregate  (cost=5617.22..5794.86 rows=35528 width=36) (actual time=1439.23..1555.65 rows=4189 loops=1)
         Filter: (sum(qty_oh) > 0::double precision)
         ->  Hash Join  (cost=1319.10..5073.07 rows=72553 width=36) (actual time=159.39..1098.30 rows=72548 loops=1)
               Hash Cond: ("outer".part_id = "inner".part_id)
               ->  Seq Scan on iwhs w  (cost=0.00..2121.53 rows=72553 width=22) (actual time=0.01..259.48 rows=72553 loops=1)
               ->  Hash  (cost=1230.28..1230.28 rows=35528 width=14) (actual time=159.11..159.11 rows=0 loops=1)
                     ->  Seq Scan on inv i  (cost=0.00..1230.28 rows=35528 width=14) (actual time=0.03..87.74 rows=35528 loops=1)
 Total runtime: 1609.91 msec
(9 rows)



Re: Speeding up aggregates

From: Tom Lane
Joe Conway <mail@joeconway.com> writes:
> Just to follow up on my last post, I did indeed find that bumping up sort_mem
> caused a switch back to HashAggregate, and a big improvement:

> parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv
> i, iwhs w where i.part_id = w.part_id group by i.part_id having sum(w.qty_oh) > 0;
>                                                           QUERY PLAN
>
----------------------------------------------------------------------------------------------------------------------------
>   HashAggregate  (cost=5254.46..5432.10 rows=35528 width=36) (actual
> time=1286.89..1399.36 rows=4189 loops=1)
>     Filter: (sum(qty_oh) > 0::double precision)
>     ->  Hash Join  (cost=1319.10..4710.31 rows=72553 width=36) (actual
> time=163.36..947.54 rows=72548 loops=1)

How many rows out if you drop the HAVING clause?

The planner's choice of which to use is dependent on its estimate of the
required hashtable size, which is proportional to its guess about how
many distinct groups there will be.  The above output doesn't tell us
that however, only how many groups passed the HAVING clause.  I'm
curious about the quality of this estimate, since the code to try to
generate not-completely-bogus group count estimates is all new ...

            regards, tom lane

Re: Speeding up aggregates

From: Joe Conway
Tom Lane wrote:
> Joe Conway <mail@joeconway.com> writes:
>
>>Just to follow up on my last post, I did indeed find that bumping up sort_mem
>>caused a switch back to HashAggregate, and a big improvement:
>
>
>>parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv
>>i, iwhs w where i.part_id = w.part_id group by i.part_id having sum(w.qty_oh) > 0;
>>                                                          QUERY PLAN

>>----------------------------------------------------------------------------------------------------------------------------
>>  HashAggregate  (cost=5254.46..5432.10 rows=35528 width=36) (actual
>>time=1286.89..1399.36 rows=4189 loops=1)
>>    Filter: (sum(qty_oh) > 0::double precision)
>>    ->  Hash Join  (cost=1319.10..4710.31 rows=72553 width=36) (actual
>>time=163.36..947.54 rows=72548 loops=1)
>
>
> How many rows out if you drop the HAVING clause?

parts=# set sort_mem to 8000;
SET
parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv i, iwhs w where i.part_id = w.part_id group by i.part_id;
                                                         QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=5617.22..5706.04 rows=35528 width=36) (actual time=1525.93..1627.41 rows=34575 loops=1)
   ->  Hash Join  (cost=1319.10..5254.45 rows=72553 width=36) (actual time=156.86..1248.73 rows=72548 loops=1)
         Hash Cond: ("outer".part_id = "inner".part_id)
         ->  Seq Scan on iwhs w  (cost=0.00..2121.53 rows=72553 width=22) (actual time=0.01..274.00 rows=72553 loops=1)
         ->  Hash  (cost=1230.28..1230.28 rows=35528 width=14) (actual time=156.65..156.65 rows=0 loops=1)
               ->  Seq Scan on inv i  (cost=0.00..1230.28 rows=35528 width=14) (actual time=0.03..86.86 rows=35528 loops=1)
 Total runtime: 1680.86 msec
(7 rows)


> The planner's choice of which to use is dependent on its estimate of the
> required hashtable size, which is proportional to its guess about how
> many distinct groups there will be.  The above output doesn't tell us
> that however, only how many groups passed the HAVING clause.  I'm
> curious about the quality of this estimate, since the code to try to
> generate not-completely-bogus group count estimates is all new ...

If I'm reading it correctly, it looks like the estimate in this case is pretty
good.

Joe



Re: Speeding up aggregates

From: Tom Lane
Joe Conway <mail@joeconway.com> writes:
>> How many rows out if you drop the HAVING clause?

> parts=# set sort_mem to 8000;
> SET
> parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv
> i, iwhs w where i.part_id = w.part_id group by i.part_id;
>                                                           QUERY PLAN
>
----------------------------------------------------------------------------------------------------------------------------
>   HashAggregate  (cost=5617.22..5706.04 rows=35528 width=36) (actual
> time=1525.93..1627.41 rows=34575 loops=1)
>     ->  Hash Join  (cost=1319.10..5254.45 rows=72553 width=36) (actual
> time=156.86..1248.73 rows=72548 loops=1)


>> The planner's choice of which to use is dependent on its estimate of the
>> required hashtable size, which is proportional to its guess about how
>> many distinct groups there will be.  The above output doesn't tell us
>> that however, only how many groups passed the HAVING clause.  I'm
>> curious about the quality of this estimate, since the code to try to
>> generate not-completely-bogus group count estimates is all new ...

> If I'm reading it correctly, it looks like the estimate in this case is pretty
> good.

Better than I had any right to expect ;-).  Thanks.

            regards, tom lane