Discussion: Inefficient plan selected by PostgreSQL 9.0.7

Inefficient plan selected by PostgreSQL 9.0.7

From:
Maxim Boguk
Date:
Hi,

I got a very inefficient plan for a simple query.
PostgreSQL 9.0.7 on FreeBSD, default_statistics_target=1000
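For reference, the per-column form of that statistics setting looks roughly like this (a sketch only; the column chosen below is just an illustration, not something discussed in this thread):

SHOW default_statistics_target;

-- Per-column override, followed by ANALYZE so the new target takes effect:
ALTER TABLE sb_messages ALTER COLUMN from_user SET STATISTICS 1000;
ANALYZE sb_messages;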

Table:
Game2=# \d sb_messages
                                            Table "public.sb_messages"
       Column        |           Type           |                            Modifiers
---------------------+--------------------------+------------------------------------------------------------------
 message_id          | integer                  | not null default nextval('sb_messages_message_id_seq'::regclass)
 date                | timestamp with time zone | default now()
 from_user           | integer                  | not null
 to_user             | integer                  | not null
 type                | smallint                 | not null default 0
 visibility_status   | smallint                 | not null default 0
 not_show_on_air     | boolean                  | default false
 clan_id             | integer                  |
 priority            | numeric                  | default 0
...
Indexes:
    "sb_messages_pkey" PRIMARY KEY, btree (message_id)
...
    "sb_messages_special3_key" btree (date DESC)                WHERE (type = ANY (ARRAY[0, 9])) AND visibility_status = 0 AND not_show_on_air = false AND clan_id IS NULL
    "sb_messages_special4_key" btree (priority DESC NULLS LAST) WHERE (type = ANY (ARRAY[0, 9])) AND visibility_status = 0 AND not_show_on_air = false AND clan_id IS NULL

Now the problem query:

Game2=# EXPLAIN (ANALYZE, COSTS) SELECT * FROM sb_messages messages_tbl WHERE
   (messages_tbl.type IN (0, 9) AND messages_tbl.visibility_status=0 AND messages_tbl.not_show_on_air='f' AND messages_tbl.clan_id IS NULL)
   AND  NOT EXISTS (SELECT 1 FROM users users_tbl WHERE blocked='t' and users_tbl.id = messages_tbl.from_user)
ORDER BY priority DESC NULLS LAST
LIMIT 10;
                                                                                    QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=24576.83..24576.84 rows=1 width=206) (actual time=514.011..514.052 rows=10 loops=1)
   ->  Sort  (cost=24576.83..24576.84 rows=1 width=206) (actual time=514.008..514.021 rows=10 loops=1)
         Sort Key: messages_tbl.priority
         Sort Method:  top-N heapsort  Memory: 40kB
         ->  Nested Loop Anti Join  (cost=0.00..24576.82 rows=1 width=206) (actual time=0.043..436.386 rows=20761 loops=1)
               ->  Index Scan using sb_messages_special3_key on sb_messages messages_tbl  (cost=0.00..3804.39 rows=35924 width=206) (actual time=0.019..69.801 rows=24938 loops=1)
               ->  Index Scan using sb_users_pkey on users users_tbl  (cost=0.00..0.53 rows=1 width=4) (actual time=0.009..0.009 rows=0 loops=24938)
                     Index Cond: (users_tbl.id = messages_tbl.from_user)
                     Filter: users_tbl.blocked
 Total runtime: 514.171 ms

As can be seen, PostgreSQL prefers a scan over the unrelated index followed by a sort.

The correct plan is used only after the sb_messages_special3_key index is dropped:

Game2=# drop index sb_messages_special3_key;
DROP INDEX
Game2=# EXPLAIN (ANALYZE, COSTS) SELECT * FROM sb_messages messages_tbl WHERE
   (messages_tbl.type IN (0, 9) AND messages_tbl.visibility_status=0 AND messages_tbl.not_show_on_air='f' AND messages_tbl.clan_id IS NULL)
   AND  NOT EXISTS (SELECT 1 FROM users users_tbl WHERE blocked='t' and users_tbl.id = messages_tbl.from_user)
ORDER BY priority DESC NULLS LAST
LIMIT 10;
                                                                               QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..24860.54 rows=1 width=206) (actual time=0.047..0.273 rows=10 loops=1)
   ->  Nested Loop Anti Join  (cost=0.00..24860.54 rows=1 width=206) (actual time=0.045..0.245 rows=10 loops=1)
         ->  Index Scan using sb_messages_special4_key on sb_messages messages_tbl  (cost=0.00..4088.11 rows=35924 width=206) (actual time=0.021..0.051 rows=10 loops=1)
         ->  Index Scan using sb_users_pkey on users users_tbl  (cost=0.00..0.53 rows=1 width=4) (actual time=0.014..0.014 rows=0 loops=10)
               Index Cond: (users_tbl.id = messages_tbl.from_user)
               Filter: users_tbl.blocked
 Total runtime: 0.391 ms
(7 rows)


No manipulation of the *_cost settings forced the database to select the correct plan while the sb_messages_special3_key index was present (any change to *_cost has a proportional effect on the cost of both plans, so the slow plan always wins on cost).
As can be seen, the actual row count is very close to the predicted amount:
   ->  Index Scan using sb_messages_special3_key on sb_messages messages_tbl  (cost=0.00..3804.39 rows=35924 width=206) (actual time=0.019..69.801 rows=24938 loops=1)
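
The kind of *_cost experiment described above might look like this (the value is purely illustrative; any such shift moves both plan costs together):

SET random_page_cost = 2.0;   -- illustrative value, session-local
EXPLAIN (ANALYZE, COSTS)
SELECT * FROM sb_messages messages_tbl
WHERE messages_tbl.type IN (0, 9)
  AND messages_tbl.visibility_status = 0
  AND messages_tbl.not_show_on_air = 'f'
  AND messages_tbl.clan_id IS NULL
  AND NOT EXISTS (SELECT 1 FROM users users_tbl
                  WHERE blocked = 't' AND users_tbl.id = messages_tbl.from_user)
ORDER BY priority DESC NULLS LAST
LIMIT 10;
RESET random_page_cost;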

What I don't understand is why the planner prefers a full scan over the unrelated index plus a sort to an index scan over the related index with no sort.
Even in the worst possible case, an index scan over the related index (sb_messages_special4_key) will read exactly the same number of rows from the table as a scan over sb_messages_special3_key,
and the scan over the related index is very likely to win.

What can I do to fight this issue? (I would like to keep both indexes on this table, for fast queries with different orderings.)
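
One session-local diagnostic (a sketch, not something tried above) is to discourage the sort-based plan and see how the planner costs the ordered sb_messages_special4_key path while both indexes are still in place:

SET enable_sort = off;   -- heavily penalizes plans that need an explicit sort
-- ... re-run the EXPLAIN (ANALYZE, COSTS) query from above here ...
RESET enable_sort;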

--
Maxim Boguk
Senior Postgresql DBA.

Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678

Skype: maxim.boguk
Jabber: maxim.boguk@gmail.com

LinkedIn profile: http://nz.linkedin.com/in/maximboguk
"If they can send one man to the moon... why can't they send them all?"

МойКруг: http://mboguk.moikrug.ru/
"People problems are solved with people.
If people cannot solve the problem, try technology.
People will then wish they'd listened at the first stage."


Re: Inefficient plan selected by PostgreSQL 9.0.7

From:
Tom Lane
Date:
Maxim Boguk <maxim.boguk@gmail.com> writes:
> I got very inefficient plan for a simple query.

It looks like the problem is with the estimate of the antijoin size:

>          ->  Nested Loop Anti Join  (cost=0.00..24576.82 rows=1 width=206)
> (actual time=0.043..436.386 rows=20761 loops=1)

that is, only about 20% of the rows in sb_messages are eliminated by the
NOT EXISTS condition, but the planner thinks that nearly all of them
will be (and that causes it to not think that the LIMIT is going to
affect anything, so it doesn't prefer a fast-start plan).
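
A quick sanity check of that fraction might look like the following sketch (built against the schema shown earlier in the thread):

-- Count the candidate messages and how many of them the NOT EXISTS would
-- actually remove (i.e. messages whose sender is blocked).
SELECT count(*) AS candidate_rows,
       sum(CASE WHEN u.blocked THEN 1 ELSE 0 END) AS rows_removed_by_antijoin
FROM sb_messages m
LEFT JOIN users u ON u.id = m.from_user
WHERE m.type IN (0, 9)
  AND m.visibility_status = 0
  AND m.not_show_on_air = 'f'
  AND m.clan_id IS NULL;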

Since you've not told us anything about the statistics of these tables,
it's hard to speculate as to why the estimate is off.

            regards, tom lane

Re: Inefficient plan selected by PostgreSQL 9.0.7

From:
Maxim Boguk
Date:


On Wed, May 2, 2012 at 2:50 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Maxim Boguk <maxim.boguk@gmail.com> writes:
>> I got very inefficient plan for a simple query.
>
> It looks like the problem is with the estimate of the antijoin size:
>
>>          ->  Nested Loop Anti Join  (cost=0.00..24576.82 rows=1 width=206)
>> (actual time=0.043..436.386 rows=20761 loops=1)
>
> that is, only about 20% of the rows in sb_messages are eliminated by the
> NOT EXISTS condition, but the planner thinks that nearly all of them
> will be (and that causes it to not think that the LIMIT is going to
> affect anything, so it doesn't prefer a fast-start plan).
>
> Since you've not told us anything about the statistics of these tables,
> it's hard to speculate as to why the estimate is off.
>
>                        regards, tom lane


Hi,

Is there any particular statistics data I need to provide besides these two:

SELECT * from pg_stats where tablename='users' and attname='blocked';
-[ RECORD 1 ]-----+--------------------
schemaname        | public
tablename         | users
attname           | blocked
inherited         | f
null_frac         | 0
avg_width         | 1
n_distinct        | 2
most_common_vals  | {f,t}
most_common_freqs | {0.573007,0.426993}
histogram_bounds  |
correlation       | 0.900014

and

SELECT schemaname,tablename,attname,inherited,null_frac,avg_width,n_distinct,correlation from pg_stats where tablename='sb_messages' and attname='from_user';
-[ RECORD 1 ]------------
schemaname  | public
tablename   | sb_messages
attname     | from_user
inherited   | f
null_frac   | 0
avg_width   | 4
n_distinct  | 103473
correlation | 0.512214

(most_common_vals, most_common_freqs and histogram_bounds are very long because of default_statistics_target=1000; the top most_common_freqs entry is only 0.00282333.)
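
If only the top of those long arrays is of interest, most_common_freqs is a plain real[] and can be sliced; a small sketch that keeps the output short:

SELECT n_distinct,
       most_common_freqs[1:5] AS top5_freqs
FROM pg_stats
WHERE tablename = 'sb_messages' AND attname = 'from_user';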

Kind Regards,
Maksym

Re: Inefficient plan selected by PostgreSQL 9.0.7

From:
Maxim Boguk
Date:


On Wed, May 2, 2012 at 2:50 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Maxim Boguk <maxim.boguk@gmail.com> writes:
>> I got very inefficient plan for a simple query.
>
> It looks like the problem is with the estimate of the antijoin size:
>
>>          ->  Nested Loop Anti Join  (cost=0.00..24576.82 rows=1 width=206)
>> (actual time=0.043..436.386 rows=20761 loops=1)
>
> that is, only about 20% of the rows in sb_messages are eliminated by the
> NOT EXISTS condition, but the planner thinks that nearly all of them
> will be (and that causes it to not think that the LIMIT is going to
> affect anything, so it doesn't prefer a fast-start plan).
>
> Since you've not told us anything about the statistics of these tables,
> it's hard to speculate as to why the estimate is off.
>
>                        regards, tom lane

The most interesting part is that the NOT EXISTS form is estimated way off, while the equivalent LEFT JOIN form is estimated correctly:

Good estimate (estimated rows=20504 vs. actual rows=20760):
Game2=# EXPLAIN ANALYZE
SELECT
*
FROM sb_messages messages_tbl
LEFT JOIN users users_tbl ON users_tbl.id = messages_tbl.from_user
WHERE
    messages_tbl.type IN (0, 9) AND
    messages_tbl.visibility_status = 0 AND
    messages_tbl.not_show_on_air = 'f' AND
    messages_tbl.clan_id IS NULL AND
    users_tbl.blocked IS DISTINCT FROM 't';
                                                                              QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop Left Join  (cost=0.00..24577.74 rows=20504 width=1037) (actual time=0.045..532.012 rows=20760 loops=1)
   Filter: (users_tbl.blocked IS DISTINCT FROM true)
   ->  Index Scan using sb_messages_special3_key on sb_messages messages_tbl  (cost=0.00..3793.75 rows=35784 width=208) (actual time=0.019..67.746 rows=24937 loops=1)
   ->  Index Scan using sb_users_pkey on users users_tbl  (cost=0.00..0.53 rows=1 width=829) (actual time=0.007..0.009 rows=1 loops=24937)
         Index Cond: (users_tbl.id = messages_tbl.from_user)
 Total runtime: 563.944 ms


Bad estimate (estimated rows=1 vs. actual rows=20760):
Game2=# EXPLAIN (ANALYZE, COSTS) SELECT * FROM sb_messages messages_tbl WHERE
   (messages_tbl.type IN (0, 9) AND messages_tbl.visibility_status=0 AND messages_tbl.not_show_on_air='f' AND messages_tbl.clan_id IS NULL)
   AND  NOT EXISTS (SELECT 1 FROM users users_tbl WHERE blocked='t' and users_tbl.id = messages_tbl.from_user);
                                                                              QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop Anti Join  (cost=0.00..24488.28 rows=1 width=208) (actual time=0.044..430.645 rows=20760 loops=1)
   ->  Index Scan using sb_messages_special3_key on sb_messages messages_tbl  (cost=0.00..3793.75 rows=35784 width=208) (actual time=0.020..67.810 rows=24937 loops=1)
   ->  Index Scan using sb_users_pkey on users users_tbl  (cost=0.00..0.53 rows=1 width=4) (actual time=0.009..0.009 rows=0 loops=24937)
         Index Cond: (users_tbl.id = messages_tbl.from_user)
         Filter: users_tbl.blocked
 Total runtime: 461.296 ms


What is curious is that the NOT EXISTS form always performs about 20% faster (I ran both EXPLAINs about 10 times each, and each time NOT EXISTS was close to 20% faster).
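
Given that the LEFT JOIN form is estimated correctly, one possible rewrite of the original LIMIT query is the sketch below. Since users_tbl.id is the primary key, the join cannot duplicate messages, so the result matches the NOT EXISTS form; whether the planner then actually picks the fast-start sb_messages_special4_key plan is not verified here.

SELECT messages_tbl.*
FROM sb_messages messages_tbl
LEFT JOIN users users_tbl ON users_tbl.id = messages_tbl.from_user
WHERE messages_tbl.type IN (0, 9)
  AND messages_tbl.visibility_status = 0
  AND messages_tbl.not_show_on_air = 'f'
  AND messages_tbl.clan_id IS NULL
  AND users_tbl.blocked IS DISTINCT FROM 't'
ORDER BY messages_tbl.priority DESC NULLS LAST
LIMIT 10;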


--
Maxim Boguk
Senior Postgresql DBA.