Discussion: Nested loop performance


Nested loop performance

From:
"Nick Fankhauser"
Date:
Hi-

I'm trying to optimize a query that I *think* should run very fast.
Essentially, I'm joining two tables that have very selective indexes and
constraining the query on an indexed field. (There's a third small lookup
table in the mix, but it doesn't really affect the bottom line.)

actor is a table containing roughly 3 million rows with an index on
actor_full_name_uppercase and a unique index on actor_id.

actor_summary also contains roughly 3 million rows. Its PK is a unique
combined index on (actor_id, county_id, case_disp_global_code).
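
In DDL terms, the relevant indexes look roughly like the sketch below. Only actor_full_name_uppercase and actor_summary_pk appear by name in the plans further down; the name of the unique index on actor_id is a guess.

-- Sketch of the indexes described above; actor_id_unique is an assumed name.
create index actor_full_name_uppercase on actor (actor_full_name_uppercase);
create unique index actor_id_unique on actor (actor_id);
create unique index actor_summary_pk
  on actor_summary (actor_id, county_id, case_disp_global_code);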

The vast majority of the rows in actor correspond to a single row in
actor_summary; I'd estimate this at 95% or more. The remaining actors with
multiple records generally have two corresponding rows in actor_summary.
actor_summary was created as a performance enhancer, so that we can store some
pre-calculated values such as the number of court cases an actor is involved
in.
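
For illustration, a summary table along these lines could be populated with something like the sketch below. It is only a sketch: case_actor_link and its columns are stand-in names rather than our actual schema, and the real actor_summary is maintained differently.

-- Sketch only: pre-aggregate per-actor case information into a summary table.
-- case_actor_link, case_id and case_public_id are illustrative stand-ins.
create table actor_summary as
  select
    actor_id,
    county_id,
    case_disp_global_code,
    count(*)            as case_count,
    min(case_id)        as single_case_id,
    min(case_public_id) as single_case_public_id
  from case_actor_link
  group by actor_id, county_id, case_disp_global_code;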

The constraint is applied first, with reasonable speed. In the example
below, it takes about 15 seconds to gather the matches in actor.

I'm unsure what is happening next. I notice that an index scan is occurring
on actor_summary_pk, with an "actual time" of 9.15, but then it looks like a
nested loop occurs at the next level to join these tables. Does this mean
that each probe of the actor_summary index will take 9.15 msec, but the
nested loop is going to do this once for each actor_id?

The nested loop appears to be where most of my time is going, so I'm
focusing on this area, but I don't know if there is a better approach to this
join.
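
A rough back-of-the-envelope check against the plan below (my own arithmetic): 9.15 msec per probe times 3639 loops is roughly 33 seconds, which together with the ~15 seconds spent on the actor index scan accounts for nearly all of the ~49 second total runtime.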

Is there a more efficient means than a nested loop to handle such a join?
Would a different method be chosen if there was exactly one row in
actor_summary for every row in actor?

-Nick

The query & explain analyze:


alpha=# explain analyze
alpha-#     select
alpha-#     min(actor.actor_id) as actor_id,
alpha-#     min(actor.actor_entity_type) as actor_entity_type,
alpha-#     min(actor.role_class_code) as role_class_code,
alpha-#     min(actor.actor_full_name) as actor_full_name,
alpha-#     min(actor.actor_person_date_of_birth) as actor_person_date_of_birth,
alpha-#     min(actor.actor_entity_acronym) as actor_entity_acronym,
alpha-#     min(actor.actor_person_last_name) as actor_person_last_name,
alpha-#     min(actor.actor_person_first_name) as actor_person_first_name,
alpha-#     min(actor.actor_person_middle_name) as actor_person_middle_name,
alpha-#     min(actor.actor_person_name_suffix) as actor_person_name_suffix,
alpha-#     min(actor.actor_person_place_of_birth) as actor_person_place_of_birth,
alpha-#     min(actor.actor_person_height) as actor_person_height,
alpha-#     min(actor.actor_person_height_unit) as actor_person_height_unit,
alpha-#     min(actor.actor_person_weight) as actor_person_weight,
alpha-#     min(actor.actor_person_weight_unit) as actor_person_weight_unit,
alpha-#     min(actor.actor_person_ethnicity) as actor_person_ethnicity,
alpha-#     min(actor.actor_person_citizenship_count) as actor_person_citizenship_count,
alpha-#     min(actor.actor_person_hair_color) as actor_person_hair_color,
alpha-#     min(actor.actor_person_scars_marks_tatto) as actor_person_scars_marks_tatto,
alpha-#     min(actor.actor_person_marital_status) as actor_person_marital_status,
alpha-#     min(actor.actor_alias_for_actor_id) as actor_alias_for_actor_id,
alpha-#     min(to_char(data_source.source_last_update, 'MM/DD/YYYY HH12:MI AM TZ')) as last_update,
alpha-#     min(actor_summary.single_case_public_id) as case_public_id,
alpha-#     min(actor_summary.single_case_id) as case_id,
alpha-#     sum(actor_summary.case_count) as case_count
alpha-#   from
alpha-#     actor,
alpha-#     actor_summary,
alpha-#     data_source
alpha-#   where
alpha-#     actor.actor_id = actor_summary.actor_id
alpha-#     and data_source.source_id = actor.source_id
alpha-#     and actor_full_name_uppercase like upper('sanders%')
alpha-#   group by
alpha-#     actor.actor_id
alpha-#   order by
alpha-#     min(actor.actor_full_name_uppercase),
alpha-#     case_count desc,
alpha-#     min(actor_summary.case_disp_global_code)
alpha-#   limit
alpha-#     1000
alpha-# ;



QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=2555.58..2555.59 rows=1 width=547) (actual time=48841.76..48842.90 rows=1000 loops=1)
   ->  Sort  (cost=2555.58..2555.59 rows=1 width=547) (actual time=48841.76..48842.18 rows=1001 loops=1)
         Sort Key: min((actor.actor_full_name_uppercase)::text), sum(actor_summary.case_count), min((actor_summary.case_disp_global_code)::text)
         ->  Aggregate  (cost=2555.50..2555.57 rows=1 width=547) (actual time=48604.17..48755.28 rows=3590 loops=1)
               ->  Group  (cost=2555.50..2555.50 rows=1 width=547) (actual time=48604.04..48647.91 rows=3594 loops=1)
                     ->  Sort  (cost=2555.50..2555.50 rows=1 width=547) (actual time=48604.01..48605.70 rows=3594 loops=1)
                           Sort Key: actor.actor_id
                           ->  Nested Loop  (cost=1.14..2555.49 rows=1 width=547) (actual time=69.09..48585.83 rows=3594 loops=1)
                                 ->  Hash Join  (cost=1.14..900.39 rows=204 width=475) (actual time=46.92..15259.02 rows=3639 loops=1)
                                       Hash Cond: ("outer".source_id = "inner".source_id)
                                       ->  Index Scan using actor_full_name_uppercase on actor  (cost=0.00..895.04 rows=222 width=463) (actual time=46.54..15220.77 rows=3639 loops=1)
                                             Index Cond: ((actor_full_name_uppercase >= 'SANDERS'::character varying) AND (actor_full_name_uppercase < 'SANDERT'::character varying))
                                             Filter: (actor_full_name_uppercase ~~ 'SANDERS%'::text)
                                       ->  Hash  (cost=1.11..1.11 rows=11 width=12) (actual time=0.05..0.05 rows=0 loops=1)
                                             ->  Seq Scan on data_source  (cost=0.00..1.11 rows=11 width=12) (actual time=0.02..0.04 rows=11 loops=1)
                                 ->  Index Scan using actor_summary_pk on actor_summary  (cost=0.00..8.11 rows=1 width=72) (actual time=9.14..9.15 rows=1 loops=3639)
                                       Index Cond: ("outer".actor_id = actor_summary.actor_id)
 Total runtime: 48851.85 msec
(18 rows)


---------------------------------------------------------------------
Nick Fankhauser

    nickf@doxpop.com  Phone 1.765.965.7363  Fax 1.765.962.9788
doxpop - Court records at your fingertips - http://www.doxpop.com/



Re: Nested loop performance

From:
Richard Poole
Date:
On Tue, Dec 16, 2003 at 12:11:59PM -0500, Nick Fankhauser wrote:
>
> I'm trying to optimize a query that I *think* should run very fast.
> Essentially, I'm joining two tables that have very selective indexes and
> constraining the query on an indexed field. (There's a third small lookup
> table in the mix, but it doesn't really affect the bottom line.)
>
> actor is a table containing roughly 3 million rows with an index on
> actor_full_name_uppercase and a unique index on actor_id.
>
> actor_summary also contains roughly 3 million rows. Its PK is a unique
> combined index on (actor_id, county_id, case_disp_global_code).

...

> I'm unsure what is happening next. I notice that an index scan is occurring
> on actor_summary_pk, with an "actual time" of 9.15, but then it looks like a
> nested loop occurs at the next level to join these tables. Does this mean
> that each probe of the actor_summary index will take 9.15 msec, but the
> nested loop is going to do this once for each actor_id?

...

> Is there a more efficient means than a nested loop to handle such a join?
> Would a different method be chosen if there was exactly one row in
> actor_summary for every row in actor?

It seems that your basic problem is that you're fetching lots of rows
from two big ol' tables. The innermost estimation mistake the planner
makes is assuming that the restriction on actor_full_name_uppercase
is much more selective than it really is: it expects 222 matching
actors when in fact there are 3639. But being right about this
wouldn't make things much quicker, if it made them quicker at all;
the index scan for those rows already takes about 15 seconds, and a
sequential scan of that table would presumably be at least in the same ballpark.

Once it's got those rows it needs to look up matches for them in
actor_summary. Again, that's 3639 index scans of an index into a
wide-ish table; your interpretation of the 9.15 is correct. (9 ms *
3639 rows =~ 30 seconds).

It doesn't seem to me that there would be a substantially better plan
for this query with your tables as they stand. If your data were more
normalised, then your big scans might be quicker (because their rows
would be smaller so they would hit fewer disk pages), and the extra
lookups in your detail tables would only be done for the rows which
actually ended up getting returned - but that would hardly be likely
to make an order-of-magnitude difference to your overall speed.

If it were my query and I really really needed it to be considerably
faster, I'd think about hyper-normalising in the hope that my main
tables would shrink so far I could keep them in RAM effectively all
the time. The answers to your direct questions are (1) yes, (2) no,
not really, and (3) no.
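
For instance, something along these lines might keep the part of actor that the name search touches narrow enough to stay cached. This is purely a sketch; actor_search and its column choice are made up for illustration, not a prescription.

-- Sketch only: keep just the columns the name search needs in a narrow table,
-- so the LIKE scan reads far fewer pages; join back to actor for the details.
create table actor_search as
  select actor_id, source_id, actor_full_name_uppercase
  from actor;

create index actor_search_name_idx
  on actor_search (actor_full_name_uppercase);

The query would then filter on actor_search first and fetch the wide actor row (and the actor_summary row) only for the matches.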

Richard

Re: Nested loop performance

From:
Stephan Szabo
Date:
On Tue, 16 Dec 2003, Nick Fankhauser wrote:

> Is there a more efficient means than a nested loop to handle such a join?
> Would a different method be chosen if there was exactly one row in
> actor_summary for every row in actor?

As a question, what does explain analyze give you if you
set enable_nestloop=false; before trying the query?
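
That is, roughly the following (enable_nestloop is a per-session setting, so it can simply be switched back afterwards):

set enable_nestloop = false;
-- re-run the explain analyze of the query above here
set enable_nestloop = true;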

Re: Nested loop performance

From:
"Nick Fankhauser"
Date:
> As a question, what does explain analyze give you if you
> set enable_nestloop=false; before trying the query?

Here are the results. It looks quite a bit more painful than the other plan,
although the wall-clock time is in the same ballpark.

alpha=# explain analyze
alpha-#   select
alpha-#     min(actor.actor_id) as actor_id,
alpha-#     min(actor.actor_entity_type) as actor_entity_type,
alpha-#     min(actor.role_class_code) as role_class_code,
alpha-#     min(actor.actor_full_name) as actor_full_name,
alpha-#     min(actor.actor_person_date_of_birth) as actor_person_date_of_birth,
alpha-#     min(actor.actor_entity_acronym) as actor_entity_acronym,
alpha-#     min(actor.actor_person_last_name) as actor_person_last_name,
alpha-#     min(actor.actor_person_first_name) as actor_person_first_name,
alpha-#     min(actor.actor_person_middle_name) as actor_person_middle_name,
alpha-#     min(actor.actor_person_name_suffix) as actor_person_name_suffix,
alpha-#     min(actor.actor_person_place_of_birth) as actor_person_place_of_birth,
alpha-#     min(actor.actor_person_height) as actor_person_height,
alpha-#     min(actor.actor_person_height_unit) as actor_person_height_unit,
alpha-#     min(actor.actor_person_weight) as actor_person_weight,
alpha-#     min(actor.actor_person_weight_unit) as actor_person_weight_unit,
alpha-#     min(actor.actor_person_ethnicity) as actor_person_ethnicity,
alpha-#     min(actor.actor_person_citizenship_count) as actor_person_citizenship_count,
alpha-#     min(actor.actor_person_hair_color) as actor_person_hair_color,
alpha-#     min(actor.actor_person_scars_marks_tatto) as actor_person_scars_marks_tatto,
alpha-#     min(actor.actor_person_marital_status) as actor_person_marital_status,
alpha-#     min(actor.actor_alias_for_actor_id) as actor_alias_for_actor_id,
alpha-#     min(to_char(data_source.source_last_update, 'MM/DD/YYYY HH12:MI AM TZ')) as last_update,
alpha-#     min(actor_summary.single_case_public_id) as case_public_id,
alpha-#     min(actor_summary.single_case_id) as case_id,
alpha-#     sum(actor_summary.case_count) as case_count
alpha-#   from
alpha-#     actor,
alpha-#     actor_summary,
alpha-#     data_source
alpha-#   where
alpha-#     actor.actor_id = actor_summary.actor_id
alpha-#     and data_source.source_id = actor.source_id
alpha-#     and actor.actor_full_name_uppercase like upper('sanders%')
alpha-#   group by
alpha-#     actor.actor_id
alpha-#   order by
alpha-#     min(actor.actor_full_name_uppercase),
alpha-#     case_count desc,
alpha-#     min(actor_summary.case_disp_global_code)
alpha-#   limit
alpha-#     1000;

QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=168919.98..168920.03 rows=20 width=548) (actual time=91247.95..91249.05 rows=1000 loops=1)
   ->  Sort  (cost=168919.98..168920.03 rows=20 width=548) (actual time=91247.95..91248.35 rows=1001 loops=1)
         Sort Key: min((actor.actor_full_name_uppercase)::text), sum(actor_summary.case_count), min((actor_summary.case_disp_global_code)::text)
         ->  Aggregate  (cost=168904.95..168919.54 rows=20 width=548) (actual time=91015.00..91164.68 rows=3590 loops=1)
               ->  Group  (cost=168904.95..168905.95 rows=201 width=548) (actual time=90999.87..91043.25 rows=3594 loops=1)
                     ->  Sort  (cost=168904.95..168905.45 rows=201 width=548) (actual time=90999.83..91001.57 rows=3594 loops=1)
                           Sort Key: actor.actor_id
                           ->  Hash Join  (cost=903.08..168897.24 rows=201 width=548) (actual time=25470.63..90983.45 rows=3594 loops=1)
                                 Hash Cond: ("outer".actor_id = "inner".actor_id)
                                 ->  Seq Scan on actor_summary  (cost=0.00..150715.43 rows=3455243 width=73) (actual time=8.03..52902.24 rows=3455243 loops=1)
                                 ->  Hash  (cost=902.57..902.57 rows=204 width=475) (actual time=25459.92..25459.92 rows=0 loops=1)
                                       ->  Hash Join  (cost=1.14..902.57 rows=204 width=475) (actual time=155.92..25451.25 rows=3639 loops=1)
                                             Hash Cond: ("outer".source_id = "inner".source_id)
                                             ->  Index Scan using actor_full_name_uppercase on actor  (cost=0.00..897.20 rows=223 width=463) (actual time=144.93..25404.10 rows=3639 loops=1)
                                                   Index Cond: ((actor_full_name_uppercase >= 'SANDERS'::character varying) AND (actor_full_name_uppercase < 'SANDERT'::character varying))
                                                   Filter: (actor_full_name_uppercase ~~ 'SANDERS%'::text)
                                             ->  Hash  (cost=1.11..1.11 rows=11 width=12) (actual time=10.66..10.66 rows=0 loops=1)
                                                   ->  Seq Scan on data_source  (cost=0.00..1.11 rows=11 width=12) (actual time=10.63..10.64 rows=11 loops=1)
 Total runtime: 91275.18 msec
(19 rows)

alpha=#



Re: Nested loop performance

From:
"Nick Fankhauser"
Date:
> It seems that your basic problem is that you're fetching lots of rows
> from two big ol' tables.

> It doesn't seem to me that there would be a substantially better plan
> for this query with your tables as they stand.

That's more or less the conclusion I had come to. I was just hoping someone
else could point out an approach I've been missing. (sigh!)



> If your data were more
> normalised, then your big scans might be quicker (because their rows
> would be smaller so they would hit fewer disk pages),

This started off as a 5-table join on well-normalized data. Unfortunately,
the actor table doesn't get any smaller, and the work involved in
calculating the "case_count" information on the fly was clearly becoming a
problem, particularly for actors with a heavy caseload (busy attorneys
and judges). The actor_summary approach makes those problem cases
go away, but the trade-off is that (as you correctly pointed out) queries on
average citizens who only have one case suffer from the de-normalized
approach.

We're currently considering the approach of just returning all of the rows
to our application and doing the aggregation and limit work in the app. The
inconsistency of the data makes it very tough for the query planner to come
up with a strategy that is always a winner.
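
Roughly, that would mean shipping something like the following to the app and doing the grouping, ordering, and limiting there (just a sketch; the real statement would keep the full column list from the query above):

select actor.*,
       data_source.source_last_update,
       actor_summary.single_case_public_id,
       actor_summary.single_case_id,
       actor_summary.case_count,
       actor_summary.case_disp_global_code
  from actor, actor_summary, data_source
 where actor.actor_id = actor_summary.actor_id
   and data_source.source_id = actor.source_id
   and actor.actor_full_name_uppercase like upper('sanders%');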

Thanks for your thoughts!

-Nick