Thread: Optimization idea


Optimization idea

From:
Vlad Arkhipov
Date:
Please do this small optimization if it is possible. It seems that the
optimizer has all the information needed to create a fast plan, but it does
not do that.

create temp table t1 (id bigint, t bigint);

insert into t1 values (1, 1);
insert into t1 values (2, 2);
insert into t1 values (2, 3);
insert into t1 values (2, 4);
insert into t1 values (3, 5);

create temp table t2 (id bigint, t bigint);

insert into t2 (id, t)
select g, 2
from generate_series(1, 200) g;

insert into t2 (id, t)
select g, 3
from generate_series(201, 300) g;

insert into t2 (id, t)
select g, 4
from generate_series(301, 400) g;

insert into t2 (id, t)
select g, 1
from generate_series(401, 100000) g;

insert into t2 (id, t)
select g, 5
from generate_series(100001, 100100) g;

create index t_idx on t2(t);

analyze t1;
analyze t2;

explain analyze
select *
from t2
  join t1 on t1.t = t2.t
where t1.t = 2

explain analyze
select *
from t2
  join t1 on t1.t = t2.t
where t1.id = 3

explain analyze
select *
from t2
where t2.t in (2, 3, 4)


These two queries are completely equivalent, and the optimizer should know
it, as I can see from the plans:

"Hash Join  (cost=1.09..2667.09 rows=75000 width=32) (actual
time=0.026..100.207 rows=400 loops=1)"
"  Hash Cond: (t2.t = t1.t)"
"  ->  Seq Scan on t2  (cost=0.00..1541.00 rows=100000 width=16) (actual
time=0.007..47.083 rows=100000 loops=1)"
"  ->  Hash  (cost=1.05..1.05 rows=3 width=16) (actual time=0.011..0.011
rows=3 loops=1)"
"        ->  Seq Scan on t1  (cost=0.00..1.05 rows=3 width=16) (actual
time=0.005..0.008 rows=3 loops=1)" <-- HERE IS THE PROBLEM. IF THE
ESTIMATED COUNT = 1 OPTIMIZER BUILDS THE CORRECT FAST PLAN, BUT IF THE
ESTIMATION IS GREATER THAN 1 WE HAVE A PROBLEM
"              Filter: (id = 2)"
"Total runtime: 100.417 ms"

"Nested Loop  (cost=0.00..1024.46 rows=20020 width=32) (actual
time=0.030..0.222 rows=100 loops=1)"
"  ->  Seq Scan on t1  (cost=0.00..1.05 rows=1 width=16) (actual
time=0.008..0.009 rows=1 loops=1)"
"        Filter: (id = 3)"
"  ->  Index Scan using t_idx on t2  (cost=0.00..773.16 rows=20020
width=16) (actual time=0.016..0.078 rows=100 loops=1)"
"        Index Cond: (t2.t = t1.t)"
"Total runtime: 0.296 ms"

"Bitmap Heap Scan on t2  (cost=16.09..556.80 rows=429 width=16) (actual
time=0.067..0.256 rows=400 loops=1)"
"  Recheck Cond: (t = ANY ('{2,3,4}'::bigint[]))"
"  ->  Bitmap Index Scan on t_idx  (cost=0.00..15.98 rows=429 width=0)
(actual time=0.056..0.056 rows=400 loops=1)"
"        Index Cond: (t = ANY ('{2,3,4}'::bigint[]))"
"Total runtime: 0.458 ms"

An ugly workaround is to add the column t1(t) to the table t2.
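
For illustration, here is a hedged sketch of such a workaround (the column
name t1_id, the backfill step and the index name are assumptions, not taken
from the original post; the point is only to let the filter hit t2 directly):

-- hypothetical denormalization: copy t1's key into t2 and index it
alter table t2 add column t1_id bigint;

update t2
set t1_id = t1.id
from t1
where t1.t = t2.t;

create index t2_t1_id_idx on t2 (t1_id);
analyze t2;

-- the join can then be answered from t2 alone, e.g.
-- select * from t2 where t1_id = 2;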

Re: Optimization idea

From:
Greg Smith
Date:
Vlad Arkhipov wrote:
> Please do this small optimization if it is possible. It seems that the
> optimizer has all the information needed to create a fast plan, but it does
> not do that.

This isn't strictly an optimization problem; it's an issue with
statistics the optimizer has to work with, the ones ANALYZE computes.
You noticed this yourself:

> HERE IS THE PROBLEM. IF THE ESTIMATED COUNT = 1 OPTIMIZER BUILDS THE
> CORRECT FAST PLAN, BUT IF THE ESTIMATION IS GREATER THAN 1 WE HAVE A
> PROBLEM

See http://www.postgresql.org/docs/current/static/planner-stats.html for
an intro to this area.

You didn't mention your PostgreSQL version.  If you're running 8.3 or
earlier, an increase to default_statistics_target might be in order to
get more data about the distribution of data in the table, to reduce the
odds of what you're seeing happening.
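
A hedged sketch of what raising the statistics target might look like (the
value 1000 is purely illustrative, not a number from Greg's message):

-- per-column statistics target, followed by a fresh ANALYZE
alter table t2 alter column t set statistics 1000;
analyze t2;

-- or raise the session default before analyzing
set default_statistics_target = 1000;
analyze t2;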

I can't replicate your problem on the current development 9.0; all three
plans come back with results quickly when I just tried it:

 Nested Loop  (cost=0.00..50.76 rows=204 width=32) (actual
time=0.049..0.959 rows=200 loops=1)
   ->  Seq Scan on t1  (cost=0.00..1.06 rows=1 width=16) (actual
time=0.013..0.016 rows=1 loops=1)
         Filter: (t = 2)
   ->  Index Scan using t_idx on t2  (cost=0.00..47.66 rows=204
width=16) (actual time=0.029..0.352 rows=200 loops=1)
         Index Cond: (t2.t = 2)
 Total runtime: 1.295 ms

 Nested Loop  (cost=0.00..1042.77 rows=20020 width=32) (actual
time=0.042..0.437 rows=100 loops=1)
   ->  Seq Scan on t1  (cost=0.00..1.06 rows=1 width=16) (actual
time=0.013..0.015 rows=1 loops=1)
         Filter: (id = 3)
   ->  Index Scan using t_idx on t2  (cost=0.00..791.45 rows=20020
width=16) (actual time=0.022..0.164 rows=100 loops=1)
         Index Cond: (t2.t = t1.t)
 Total runtime: 0.608 ms

 Bitmap Heap Scan on t2  (cost=16.11..558.73 rows=433 width=16) (actual
time=0.095..0.674 rows=400 loops=1)
   Recheck Cond: (t = ANY ('{2,3,4}'::bigint[]))
   ->  Bitmap Index Scan on t_idx  (cost=0.00..16.00 rows=433 width=0)
(actual time=0.075..0.075 rows=400 loops=1)
         Index Cond: (t = ANY ('{2,3,4}'::bigint[]))
 Total runtime: 1.213 ms

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com   www.2ndQuadrant.us


Re: Optimization idea

From:
Vlad Arkhipov
Date:
Greg Smith writes:
> Vlad Arkhipov wrote:
>> Please do this small optimization if it is possible. It seems that the
>> optimizer has all the information needed to create a fast plan, but it does
>> not do that.
>
> This isn't strictly an optimization problem; it's an issue with
> statistics the optimizer has to work with, the ones ANALYZE computes.
> You noticed this yourself:
>
I don't think this is just an issue with statistics, because the same
problem arises when I try executing a query like this:

explain analyze
select *
from t2
where t2.t in (select 2 union select 3 union select 4) /* It works well
if there is only one row in the subquery */

"Hash Semi Join  (cost=0.17..2474.10 rows=60060 width=16) (actual
time=0.032..103.034 rows=400 loops=1)"
"  Hash Cond: (t2.t = (2))"
"  ->  Seq Scan on t2  (cost=0.00..1543.00 rows=100100 width=16) (actual
time=0.007..47.856 rows=100100 loops=1)"
"  ->  Hash  (cost=0.13..0.13 rows=3 width=4) (actual time=0.019..0.019
rows=3 loops=1)"
"        ->  HashAggregate  (cost=0.07..0.10 rows=3 width=0) (actual
time=0.013..0.015 rows=3 loops=1)"
"              ->  Append  (cost=0.00..0.06 rows=3 width=0) (actual
time=0.001..0.007 rows=3 loops=1)"
"                    ->  Result  (cost=0.00..0.01 rows=1 width=0)
(actual time=0.001..0.001 rows=1 loops=1)"
"                    ->  Result  (cost=0.00..0.01 rows=1 width=0)
(actual time=0.000..0.000 rows=1 loops=1)"
"                    ->  Result  (cost=0.00..0.01 rows=1 width=0)
(actual time=0.000..0.000 rows=1 loops=1)"
"Total runtime: 103.244 ms"

vs

explain analyze
select *
from t2
where t2.t in (2, 3, 4)

"Bitmap Heap Scan on t2  (cost=15.53..527.91 rows=357 width=16) (actual
time=0.068..0.255 rows=400 loops=1)"
"  Recheck Cond: (t = ANY ('{2,3,4}'::bigint[]))"
"  ->  Bitmap Index Scan on t_idx  (cost=0.00..15.44 rows=357 width=0)
(actual time=0.056..0.056 rows=400 loops=1)"
"        Index Cond: (t = ANY ('{2,3,4}'::bigint[]))"
"Total runtime: 0.445 ms"

I also tried setting the columns' statistics target to 10000; nothing changed.
The PostgreSQL version is 8.4.2. It sounds good that there is no such issue
on PostgreSQL 9.0; I'll try it on the weekend.
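
For reference, a hedged sketch of what that attempt presumably looked like
(the exact statements are assumptions; only the value 10000 comes from the
message above):

alter table t1 alter column t set statistics 10000;
alter table t2 alter column t set statistics 10000;
analyze t1;
analyze t2;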

Re: Optimization idea

From:
Vlad Arkhipov
Date:
Greg Smith writes:
> I can't replicate your problem on the current development 9.0; all
> three plans come back with results quickly when I just tried it:
>
> Nested Loop  (cost=0.00..50.76 rows=204 width=32) (actual
> time=0.049..0.959 rows=200 loops=1)
>   ->  Seq Scan on t1  (cost=0.00..1.06 rows=1 width=16) (actual
> time=0.013..0.016 rows=1 loops=1)
>         Filter: (t = 2)
>   ->  Index Scan using t_idx on t2  (cost=0.00..47.66 rows=204
> width=16) (actual time=0.029..0.352 rows=200 loops=1)
>         Index Cond: (t2.t = 2)
> Total runtime: 1.295 ms
>
> Nested Loop  (cost=0.00..1042.77 rows=20020 width=32) (actual
> time=0.042..0.437 rows=100 loops=1)
>   ->  Seq Scan on t1  (cost=0.00..1.06 rows=1 width=16) (actual
> time=0.013..0.015 rows=1 loops=1)
>         Filter: (id = 3)
>   ->  Index Scan using t_idx on t2  (cost=0.00..791.45 rows=20020
> width=16) (actual time=0.022..0.164 rows=100 loops=1)
>         Index Cond: (t2.t = t1.t)
> Total runtime: 0.608 ms
>
> Bitmap Heap Scan on t2  (cost=16.11..558.73 rows=433 width=16) (actual
> time=0.095..0.674 rows=400 loops=1)
>   Recheck Cond: (t = ANY ('{2,3,4}'::bigint[]))
>   ->  Bitmap Index Scan on t_idx  (cost=0.00..16.00 rows=433 width=0)
> (actual time=0.075..0.075 rows=400 loops=1)
>         Index Cond: (t = ANY ('{2,3,4}'::bigint[]))
> Total runtime: 1.213 ms
>

I just noticed a typo in the first query. Here are the correct queries:

create temp table t1 (id bigint, t bigint);

insert into t1 values (1, 1);
insert into t1 values (2, 2);
insert into t1 values (2, 3);
insert into t1 values (2, 4);
insert into t1 values (3, 5);

create temp table t2 (id bigint, t bigint);

insert into t2 (id, t)
select g, 2
from generate_series(1, 200) g;

insert into t2 (id, t)
select g, 3
from generate_series(201, 300) g;

insert into t2 (id, t)
select g, 4
from generate_series(301, 400) g;

insert into t2 (id, t)
select g, 1
from generate_series(401, 100000) g;

insert into t2 (id, t)
select g, 5
from generate_series(100001, 100100) g;

create index t_idx on t2(t);

analyze t1;
analyze t2;

explain analyze
select *
from t2
 join t1 on t1.t = t2.t
where t1.id = 2;

explain analyze
select *
from t2
 join t1 on t1.t = t2.t
where t1.id = 3;

explain analyze
select *
from t2
where t2.t in (2, 3, 4);

I've just tried these queries on PostgreSQL 9.0alpha4; nothing differs
from PostgreSQL 8.4.

Re: Optimization idea

From:
Robert Haas
Date:
On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov <arhipov@dc.baikal.ru> wrote:
> I don't think this is just an issue with statistics, because the same
> problem arises when I try executing a query like this:

I'm not sure how you think this proves that it isn't a problem with
statistics, but I think what you should be focusing on here, looking
back to your original email, is that the plans that are actually much
faster have almost as much estimated cost as the slower one.  Since
all your data is probably fully cached, at a first cut, I might try
setting random_page_cost and seq_page_cost to 0.005 or so, and
adjusting effective_cache_size to something appropriate.
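
A hedged sketch of trying those settings in a session (the page costs are the
values Robert mentions; the effective_cache_size figure is an assumption and
should be matched to the machine's RAM):

set seq_page_cost = 0.005;
set random_page_cost = 0.005;
set effective_cache_size = '4GB';  -- assumption: something close to the available cache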

...Robert

Re: Optimization idea

From:
Cédric Villemain
Date:
2010/4/23 Robert Haas <robertmhaas@gmail.com>:
> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov <arhipov@dc.baikal.ru> wrote:
>> I don't think this is just an issue with statistics, because the same
>> problem arises when I try executing a query like this:
>
> I'm not sure how you think this proves that it isn't a problem with
> statistics, but I think what you should be focusing on here, looking
> back to your original email, is that the plans that are actually much
> faster have almost as much estimated cost as the slower one.  Since
> all your data is probably fully cached, at a first cut, I might try
> setting random_page_cost and seq_page_cost to 0.005 or so, and
> adjusting effective_cache_size to something appropriate.

That will help correct the situation, but the planner is losing here, I think.

>
> ...Robert
>



--
Cédric Villemain

Re: Optimization idea

From:
Robert Haas
Date:
On Fri, Apr 23, 2010 at 9:09 AM, Cédric Villemain
<cedric.villemain.debian@gmail.com> wrote:
> 2010/4/23 Robert Haas <robertmhaas@gmail.com>:
>> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov <arhipov@dc.baikal.ru> wrote:
>>> I don't think this is just an issue with statistics, because the same
>>> problem arises when I try executing a query like this:
>>
>> I'm not sure how you think this proves that it isn't a problem with
>> statistics, but I think what you should be focusing on here, looking
>> back to your original email, is that the plans that are actually much
>> faster have almost as much estimated cost as the slower one.  Since
>> all your data is probably fully cached, at a first cut, I might try
>> setting random_page_cost and seq_page_cost to 0.005 or so, and
>> adjusting effective_cache_size to something appropriate.
>
> That will help correct the situation, but the planner is losing here, I think.

Well, what do you think the planner should do differently?

...Robert

Re: Optimization idea

From:
"Kevin Grittner"
Date:
Cédric Villemain<cedric.villemain.debian@gmail.com> wrote:
> 2010/4/23 Robert Haas <robertmhaas@gmail.com>:

>> Since all your data is probably fully cached, at a first cut, I
>> might try setting random_page_cost and seq_page_cost to 0.005 or
>> so, and adjusting effective_cache_size to something appropriate.
>
> That will help correct the situation, but the planner is losing
> here, I think.

The planner produces a lot of possible plans to produce the
requested results, and then calculates a cost for each.  The lowest
cost plan which will produce the correct results is the one chosen.
If your costing factors don't represent the reality of your
environment, it won't pick the best plan for your environment.

-Kevin

Re: Optimization idea

From:
Cédric Villemain
Date:
2010/4/23 Robert Haas <robertmhaas@gmail.com>:
> On Fri, Apr 23, 2010 at 9:09 AM, Cédric Villemain
> <cedric.villemain.debian@gmail.com> wrote:
>> 2010/4/23 Robert Haas <robertmhaas@gmail.com>:
>>> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov <arhipov@dc.baikal.ru> wrote:
>>>> I don't think this is just an issue with statistics, because the same
>>>> problem arises when I try executing a query like this:
>>>
>>> I'm not sure how you think this proves that it isn't a problem with
>>> statistics, but I think what you should be focusing on here, looking
>>> back to your original email, is that the plans that are actually much
>>> faster have almost as much estimated cost as the slower one.  Since
>>> all your data is probably fully cached, at a first cut, I might try
>>> setting random_page_cost and seq_page_cost to 0.005 or so, and
>>> adjusting effective_cache_size to something appropriate.
>>
>> That will help correct the situation, but the planner is losing here, I think.
>
> Well, what do you think the planner should do differently?

Here the planner just divides the number of rows in the t2 table by the
number of distinct values of t1.t; this is the rows=20020 we can see in
the explains.
It seems normal, but it also looks to me like it can be improved.
When estimating the row count as just num_rows/n_distinct, it *knows*
that this is wrong, because the most_common_freqs of t2.t say that 99600
of the rows have the value 1, and fewer than 200 in every other case.
So in every case the planner makes a (perhaps good) choice while being
sure its estimate is wrong.
I wonder if we can improve the planner here.

In this case, where the number of rows is lower than the stats
target (in t1.t), perhaps the planner could improve its decision by going
a bit further and trying a plan for each of the n_distinct values found
in t2.t.

I don't have a very clear idea of how to do that, but it may be better if
the planner estimated whether its plan is 100% (or lower, just an idea) sure
to happen, and if so that's fine; otherwise, try another plan.

In this test case, if the query is:
select *
from t2
join t1 on t1.t = t2.t
where t1.id = X;

If X=1 then the planner has a 20% chance that the rowcount=99600 and
an 80% chance that the rowcount=200 or less; by assuming a rowcount=20020,
how can it find the good plan anyway? Is it better to start with a bad
estimate and perhaps find a good plan, or to start with an estimate
which may be bad but leads to a good plan in more than XX% of the
cases?

So, currently, the planner does as expected, but can we try another
approach for those corner cases?
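
As an aside, a hedged sketch of how to look at the statistics being discussed
here (pg_stats is the standard view; only the relevant columns are selected):

select n_distinct, most_common_vals, most_common_freqs
from pg_stats
where tablename = 't2'
  and attname = 't';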

>
> ...Robert
>



--
Cédric Villemain

Re: Optimization idea

From:
Robert Haas
Date:
On Fri, Apr 23, 2010 at 3:22 PM, Cédric Villemain
<cedric.villemain.debian@gmail.com> wrote:
> 2010/4/23 Robert Haas <robertmhaas@gmail.com>:
>> On Fri, Apr 23, 2010 at 9:09 AM, Cédric Villemain
>> <cedric.villemain.debian@gmail.com> wrote:
>>> 2010/4/23 Robert Haas <robertmhaas@gmail.com>:
>>>> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov <arhipov@dc.baikal.ru> wrote:
>>>>> I don't think this is just an issue with statistics, because the same
>>>>> problem arises when I try executing a query like this:
>>>>
>>>> I'm not sure how you think this proves that it isn't a problem with
>>>> statistics, but I think what you should be focusing on here, looking
>>>> back to your original email, is that the plans that are actually much
>>>> faster have almost as much estimated cost as the slower one.  Since
>>>> all your data is probably fully cached, at a first cut, I might try
>>>> setting random_page_cost and seq_page_cost to 0.005 or so, and
>>>> adjusting effective_cache_size to something appropriate.
>>>
>>> That will help correct the situation, but the planner is losing here, I think.
>>
>> Well, what do you think the planner should do differently?
>
> Here the planner just divides the number of rows in the t2 table by the
> number of distinct values of t1.t; this is the rows=20020 we can see in
> the explains.
> It seems normal, but it also looks to me like it can be improved.
> When estimating the row count as just num_rows/n_distinct, it *knows*
> that this is wrong, because the most_common_freqs of t2.t say that 99600
> of the rows have the value 1, and fewer than 200 in every other case.
> So in every case the planner makes a (perhaps good) choice while being
> sure its estimate is wrong.
> I wonder if we can improve the planner here.
>
> In this case, where the number of rows is lower than the stats
> target (in t1.t), perhaps the planner could improve its decision by going
> a bit further and trying a plan for each of the n_distinct values found
> in t2.t.
>
> I don't have a very clear idea of how to do that, but it may be better if
> the planner estimated whether its plan is 100% (or lower, just an idea) sure
> to happen, and if so that's fine; otherwise, try another plan.
>
> In this test case, if the query is:
> select *
> from t2
> join t1 on t1.t = t2.t
> where t1.id = X;
>
> If X=1 then the planner has a 20% chance that the rowcount=99600 and
> an 80% chance that the rowcount=200 or less; by assuming a rowcount=20020,
> how can it find the good plan anyway? Is it better to start with a bad
> estimate and perhaps find a good plan, or to start with an estimate
> which may be bad but leads to a good plan in more than XX% of the
> cases?
>
> So, currently, the planner does as expected, but can we try another
> approach for those corner cases?

Hmm.  We currently have a heuristic that we don't record a value as an
MCV unless it's more frequent than the average frequency.  When the
number of MCVs is substantially smaller than the number of distinct
values in the table this is probably a good heuristic, since it
prevents us from bothering with the recording of some values that are
probably only marginally more interesting than other values we don't
have space to record.  But if ndistinct is less than the stats target
we could in theory record every value we find in the MCVs table and
leave the histogram empty.  Not sure if that would be better in
general, or not, but it's a thought.

...Robert

Re: Optimization idea

From:
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> Hmm.  We currently have a heuristic that we don't record a value as an
> MCV unless it's more frequent than the average frequency.  When the
> number of MCVs is substantially smaller than the number of distinct
> values in the table this is probably a good heuristic, since it
> prevents us from bothering with the recording of some values that are
> probably only marginally more interesting than other values we don't
> have space to record.  But if ndistinct is less than the stats target
> we could in theory record every value we find in the MCVs table and
> leave the histogram empty.

Which, in fact, is exactly what we do.  Cf analyze.c lines 2414ff
(as of CVS HEAD).  The heuristic you mention only gets applied after
we determine that a complete MCV list won't fit.

            regards, tom lane

Re: Optimization idea

From:
Robert Haas
Date:
On Fri, Apr 23, 2010 at 6:53 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> Hmm.  We currently have a heuristic that we don't record a value as an
>> MCV unless it's more frequent than the average frequency.  When the
>> number of MCVs is substantially smaller than the number of distinct
>> values in the table this is probably a good heuristic, since it
>> prevents us from bothering with the recording of some values that are
>> probably only marginally more interesting than other values we don't
>> have space to record.  But if ndistinct is less than the stats target
>> we could in theory record every value we find in the MCVs table and
>> leave the histogram empty.
>
> Which, in fact, is exactly what we do.  Cf analyze.c lines 2414ff
> (as of CVS HEAD).  The heuristic you mention only gets applied after
> we determine that a complete MCV list won't fit.

Oh, hrmm.  I guess I need to go try to understand this example again, then.

...Robert

Re: Optimization idea

From:
Vlad Arkhipov
Date:
> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov <arhipov@dc.baikal.ru> wrote:
>
>> I don't think this is just an issue with statistics, because the same
>> problem arises when I try executing a query like this:
>>
>
> I'm not sure how you think this proves that it isn't a problem with
> statistics, but I think what you should be focusing on here, looking
> back to your original email, is that the plans that are actually much
> faster have almost as much estimated cost as the slower one.  Since
> all your data is probably fully cached, at a first cut, I might try
> setting random_page_cost and seq_page_cost to 0.005 or so, and
> adjusting effective_cache_size to something appropriate.
>
> ...Robert
>
>

OK. I thought it was quite obvious from these two queries. I can't
understand why the estimated row count is 40040 in the first plan.

test=# explain analyze select * from t2 join t1 on t1.t = t2.t where
t1.t in (2,3,4);
                                                    QUERY PLAN
------------------------------------------------------------------------------------------------------------------
Hash Join  (cost=1.09..2319.87 rows=40040 width=32) (actual
time=0.050..356.269 rows=400 loops=1)
   Hash Cond: (t2.t = t1.t)
   ->  Seq Scan on t2  (cost=0.00..1543.00 rows=100100 width=16) (actual
time=0.013..176.087 rows=100100 loops=1)
   ->  Hash  (cost=1.07..1.07 rows=2 width=16) (actual time=0.023..0.023
rows=3 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 1kB
         ->  Seq Scan on t1  (cost=0.00..1.07 rows=2 width=16) (actual
time=0.006..0.014 rows=3 loops=1)
               Filter: (t = ANY ('{2,3,4}'::bigint[]))
Total runtime: 356.971 ms
(8 rows)

test=# explain analyze select * from t2 join t1 on t1.t = t2.t where
t1.t = 2 union all select * from t2 join t1 on t1.t = t2.t where t1.t =
3 union all select * from t2 join t1 on t1.t = t2.t where t1.t = 4;
                                                         QUERY PLAN

----------------------------------------------------------------------------------------------------------------------------
Append  (cost=0.00..112.42 rows=407 width=32) (actual time=0.048..3.487
rows=400 loops=1)
   ->  Nested Loop  (cost=0.00..47.51 rows=197 width=32) (actual
time=0.045..1.061 rows=200 loops=1)
         ->  Seq Scan on t1  (cost=0.00..1.06 rows=1 width=16) (actual
time=0.011..0.014 rows=1 loops=1)
               Filter: (t = 2)
         ->  Index Scan using t_idx on t2  (cost=0.00..44.48 rows=197
width=16) (actual time=0.026..0.382 rows=200 loops=1)
               Index Cond: (pg_temp_2.t2.t = 2)
   ->  Nested Loop  (cost=0.00..32.67 rows=117 width=32) (actual
time=0.019..0.599 rows=100 loops=1)
         ->  Seq Scan on t1  (cost=0.00..1.06 rows=1 width=16) (actual
time=0.003..0.006 rows=1 loops=1)
               Filter: (t = 3)
         ->  Index Scan using t_idx on t2  (cost=0.00..30.43 rows=117
width=16) (actual time=0.010..0.211 rows=100 loops=1)
               Index Cond: (pg_temp_2.t2.t = 3)
   ->  Nested Loop  (cost=0.00..28.17 rows=93 width=32) (actual
time=0.017..0.534 rows=100 loops=1)
         ->  Seq Scan on t1  (cost=0.00..1.06 rows=1 width=16) (actual
time=0.005..0.008 rows=1 loops=1)
               Filter: (t = 4)
         ->  Index Scan using t_idx on t2  (cost=0.00..26.18 rows=93
width=16) (actual time=0.007..0.187 rows=100 loops=1)
               Index Cond: (pg_temp_2.t2.t = 4)
Total runtime: 4.190 ms
(17 rows)


Re: Optimization idea

From:
Cédric Villemain
Date:
2010/4/26 Vlad Arkhipov <arhipov@dc.baikal.ru>:
>
>> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov <arhipov@dc.baikal.ru>
>> wrote:
>>
>>>
>>> I don't think this is just an issue with statistics, because the same
>>> problem arises when I try executing a query like this:
>>>
>>
>> I'm not sure how you think this proves that it isn't a problem with
>> statistics, but I think what you should be focusing on here, looking
>> back to your original email, is that the plans that are actually much
>> faster have almost as much estimated cost as the slower one.  Since
>> all your data is probably fully cached, at a first cut, I might try
>> setting random_page_cost and seq_page_cost to 0.005 or so, and
>> adjusting effective_cache_size to something appropriate.
>>
>> ...Robert
>>
>>
>
> OK. I thought it was quite obvious from these two queries. I can't
> understand why the estimated row count is 40040 in the first plan.

In the first query, the planner doesn't use the information about the values 2, 3, 4.
It just says: I'll bet I'll have 2 rows in t1 (I think it should
say 3, but it doesn't).
So it divides the estimated number of rows in the t2 table by 5
(distinct values) and multiplies by 2 (rows): 40040.

In the second query the planner uses a different behavior: it
expands the value of t1.t to t2.t for each join relation and finds a
cheaper plan (than the one using a seqscan on t2).

We are here in a corner-case situation where the n_distinct values <
the statistics target on the column, and where we might be able to improve
the planner's decision. I believe I have already read something on this
topic on -hackers...

>
> test=# explain analyze select * from t2 join t1 on t1.t = t2.t where
> t1.t in (2,3,4);
>                                                   QUERY PLAN
> ------------------------------------------------------------------------------------------------------------------
> Hash Join  (cost=1.09..2319.87 rows=40040 width=32) (actual
> time=0.050..356.269 rows=400 loops=1)
>  Hash Cond: (t2.t = t1.t)
>  ->  Seq Scan on t2  (cost=0.00..1543.00 rows=100100 width=16) (actual
> time=0.013..176.087 rows=100100 loops=1)
>  ->  Hash  (cost=1.07..1.07 rows=2 width=16) (actual time=0.023..0.023
> rows=3 loops=1)
>        Buckets: 1024  Batches: 1  Memory Usage: 1kB
>        ->  Seq Scan on t1  (cost=0.00..1.07 rows=2 width=16) (actual
> time=0.006..0.014 rows=3 loops=1)
>              Filter: (t = ANY ('{2,3,4}'::bigint[]))
> Total runtime: 356.971 ms
> (8 rows)
>
> test=# explain analyze select * from t2 join t1 on t1.t = t2.t where
> t1.t = 2 union all select * from t2 join t1 on t1.t = t2.t where t1.t =
> 3 union all select * from t2 join t1 on t1.t = t2.t where t1.t = 4;
>                                                        QUERY PLAN
>
----------------------------------------------------------------------------------------------------------------------------
> Append  (cost=0.00..112.42 rows=407 width=32) (actual time=0.048..3.487
> rows=400 loops=1)
>  ->  Nested Loop  (cost=0.00..47.51 rows=197 width=32) (actual
> time=0.045..1.061 rows=200 loops=1)
>        ->  Seq Scan on t1  (cost=0.00..1.06 rows=1 width=16) (actual
> time=0.011..0.014 rows=1 loops=1)
>              Filter: (t = 2)
>        ->  Index Scan using t_idx on t2  (cost=0.00..44.48 rows=197
> width=16) (actual time=0.026..0.382 rows=200 loops=1)
>              Index Cond: (pg_temp_2.t2.t = 2)
>  ->  Nested Loop  (cost=0.00..32.67 rows=117 width=32) (actual
> time=0.019..0.599 rows=100 loops=1)
>        ->  Seq Scan on t1  (cost=0.00..1.06 rows=1 width=16) (actual
> time=0.003..0.006 rows=1 loops=1)
>              Filter: (t = 3)
>        ->  Index Scan using t_idx on t2  (cost=0.00..30.43 rows=117
> width=16) (actual time=0.010..0.211 rows=100 loops=1)
>              Index Cond: (pg_temp_2.t2.t = 3)
>  ->  Nested Loop  (cost=0.00..28.17 rows=93 width=32) (actual
> time=0.017..0.534 rows=100 loops=1)
>        ->  Seq Scan on t1  (cost=0.00..1.06 rows=1 width=16) (actual
> time=0.005..0.008 rows=1 loops=1)
>              Filter: (t = 4)
>        ->  Index Scan using t_idx on t2  (cost=0.00..26.18 rows=93
> width=16) (actual time=0.007..0.187 rows=100 loops=1)
>              Index Cond: (pg_temp_2.t2.t = 4)
> Total runtime: 4.190 ms
> (17 rows)
>
>



--
Cédric Villemain

Re: Optimization idea

From:
Robert Haas
Date:
On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain
<cedric.villemain.debian@gmail.com> wrote:
> In the first query, the planner doesn't use the information about the values 2, 3, 4.
> It just says: I'll bet I'll have 2 rows in t1 (I think it should
> say 3, but it doesn't).
> So it divides the estimated number of rows in the t2 table by 5
> (distinct values) and multiplies by 2 (rows): 40040.

I think it's doing something more complicated.  See scalararraysel().

> In the second query the planner uses a different behavior: it
> expands the value of t1.t to t2.t for each join relation and finds a
> cheaper plan (than the one using a seqscan on t2).

I think the problem here is one we've discussed before: if the query
planner knows that something is true of x (like, say, x =
ANY('{2,3,4}')) and it also knows that x = y, it doesn't infer that
the same thing holds of y (i.e. y = ANY('{2,3,4}')) unless the thing
that is known to be true of x is that x is equal to some constant.
Tom doesn't think it would be worth the additional CPU time that it
would take to make these sorts of deductions.  I'm not sure I believe
that, but I haven't tried to write the code, either.
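
For illustration only (not a rewrite proposed in the thread), the missing
inference corresponds to repeating the ANY() condition on t2.t by hand, which
lets the planner use the index on t2(t):

explain analyze
select *
from t2
  join t1 on t1.t = t2.t
where t1.t = any ('{2,3,4}'::bigint[])
  and t2.t = any ('{2,3,4}'::bigint[]);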

...Robert

Re: Optimization idea

From:
Cédric Villemain
Date:
2010/4/28 Robert Haas <robertmhaas@gmail.com>:
> On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain
> <cedric.villemain.debian@gmail.com> wrote:
>> In the first query, the planner doesn't use the information about the values 2, 3, 4.
>> It just says: I'll bet I'll have 2 rows in t1 (I think it should
>> say 3, but it doesn't).
>> So it divides the estimated number of rows in the t2 table by 5
>> (distinct values) and multiplies by 2 (rows): 40040.
>
> I think it's doing something more complicated.  See scalararraysel().

Thank you for driving me to the right function, Robert.
It is in fact more complicated :)

>
>> In the second query the planner uses a different behavior: it
>> expands the value of t1.t to t2.t for each join relation and finds a
>> cheaper plan (than the one using a seqscan on t2).
>
> I think the problem here is one we've discussed before: if the query
> planner knows that something is true of x (like, say, x =
> ANY('{2,3,4}')) and it also knows that x = y, it doesn't infer that
>> the same thing holds of y (i.e. y = ANY('{2,3,4}')) unless the thing
> that is known to be true of x is that x is equal to some constant.
> Tom doesn't think it would be worth the additional CPU time that it
> would take to make these sorts of deductions.  I'm not sure I believe
> that, but I haven't tried to write the code, either.

If I understand correctly, I did have some issues with
constraint_exclusion = on for complex queries in data mining, where the
planner failed to understand it must use only one partition because
the WHERE clause was not 'explicit' enough. But it was a long time ago
and I don't have my use case anymore.

We probably need to find some real case where the planner optimisation
makes sense. But I don't want usual queries to see their CPU time
increase...
<joke>Do we need real Planner Hints?</joke>

--
Cédric Villemain

Re: Optimization idea

From:
Vlad Arkhipov
Date:
> 2010/4/28 Robert Haas <robertmhaas@gmail.com>:
>
>> On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain
>> <cedric.villemain.debian@gmail.com> wrote:
>>
>>> In the first query, the planner doesn't use the information about the values 2, 3, 4.
>>> It just says: I'll bet I'll have 2 rows in t1 (I think it should
>>> say 3, but it doesn't).
>>> So it divides the estimated number of rows in the t2 table by 5
>>> (distinct values) and multiplies by 2 (rows): 40040.
>>>
>> I think it's doing something more complicated.  See scalararraysel().
>>
>
> Thank you for driving me to the right function, Robert.
> It is in fact more complicated :)
>
>
>>> In the second query the planner uses a different behavior: it
>>> expands the value of t1.t to t2.t for each join relation and finds a
>>> cheaper plan (than the one using a seqscan on t2).
>>>
>> I think the problem here is one we've discussed before: if the query
>> planner knows that something is true of x (like, say, x =
>> ANY('{2,3,4}')) and it also knows that x = y, it doesn't infer that
>> the same thing holds of y (i.e. y = ANY('{2,3,4}')) unless the thing
>> that is known to be true of x is that x is equal to some constant.
>> Tom doesn't think it would be worth the additional CPU time that it
>> would take to make these sorts of deductions.  I'm not sure I believe
>> that, but I haven't tried to write the code, either.
>>
>
> If I understand correctly, I did have some issues with
> constraint_exclusion = on for complex queries in data mining, where the
> planner failed to understand it must use only one partition because
> the WHERE clause was not 'explicit' enough. But it was a long time ago
> and I don't have my use case anymore.
>
> We probably need to find some real case where the planner optimisation
> makes sense. But I don't want usual queries to see their CPU time
> increase...
> <joke>Do we need real Planner Hints?</joke>
>
>
Even if that is done, it does not solve the original issue. If I
understood you right, there is currently no decent way of speeding up the query

select *
from t2
join t1 on t1.t = t2.t
where t1.id = X;

except propagating the t1.id value to the table t2 and creating an
index on this column? Then the query would look like

select *
from t2
where t1_id = X;


Re: Optimization idea

From:
Robert Haas
Date:
On Wed, Apr 28, 2010 at 5:37 AM, Vlad Arkhipov <arhipov@dc.baikal.ru> wrote:
> Even if that is done, it does not solve the original issue. If I
> understood you right, there is currently no decent way of speeding up the query
>
> select *
> from t2
> join t1 on t1.t = t2.t
> where t1.id = X;
>
> except propagating the t1.id value to the table t2 and creating an
> index on this column?

No, what I'm saying is that if X is any ANY() expression, you can get
a faster plan in this case by writing:

SELECT * FROM t2 JOIN t1 ON t1.t = t2.t WHERE t2.id = X;

For me this is about 8x faster.

...Robert

Re: Optimization idea

From:
Cédric Villemain
Date:
2010/4/29 Robert Haas <robertmhaas@gmail.com>:
> On Wed, Apr 28, 2010 at 5:37 AM, Vlad Arkhipov <arhipov@dc.baikal.ru> wrote:
>> Even if that is done, it does not solve the original issue. If I
>> understood you right, there is currently no decent way of speeding up the query
>>
>> select *
>> from t2
>> join t1 on t1.t = t2.t
>> where t1.id = X;
>>
>> except propagating the t1.id value to the table t2 and creating an
>> index on this column?
>
> No, what I'm saying is that if X is any ANY() expression, you can get
> a faster plan in this case by writing:
>
> SELECT * FROM t2 JOIN t1 ON t1.t = t2.t WHERE t2.id = X;

SELECT * FROM t2 JOIN t1 ON t1.t = t2.t WHERE t2.t = X;

Side note: you might want/need to improve the statistics on the column
t2.t (in a situation/distribution like this one).

>
> For me this is about 8x faster.
>
> ...Robert
>



--
Cédric Villemain

Re: Optimization idea

From:
Cédric Villemain
Date:
2010/4/28 Robert Haas <robertmhaas@gmail.com>:
> On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain
> <cedric.villemain.debian@gmail.com> wrote:
>> In the first query, the planner doesn't use the information about the values 2, 3, 4.
>> It just says: I'll bet I'll have 2 rows in t1 (I think it should
>> say 3, but it doesn't).
>> So it divides the estimated number of rows in the t2 table by 5
>> (distinct values) and multiplies by 2 (rows): 40040.
>
> I think it's doing something more complicated.  See scalararraysel().
>
>> In the second query the planner uses a different behavior: it
>> expands the value of t1.t to t2.t for each join relation and finds a
>> cheaper plan (than the one using a seqscan on t2).
>
> I think the problem here is one we've discussed before: if the query
> planner knows that something is true of x (like, say, x =
> ANY('{2,3,4}')) and it also knows that x = y, it doesn't infer that
> the same thing holds of y (i.e. y = ANY('{2,3,4}')) unless the thing
> that is known to be true of x is that x is equal to some constant.
> Tom doesn't think it would be worth the additional CPU time that it
> would take to make these sorts of deductions.  I'm not sure I believe
> that, but I haven't tried to write the code, either.

Related to this too:
http://archives.postgresql.org/pgsql-general/2010-05/msg00009.php  ?

>
> ...Robert
>



--
Cédric Villemain

Re: Optimization idea

From:
Cédric Villemain
Date:
2010/5/1 Cédric Villemain <cedric.villemain.debian@gmail.com>:
> 2010/4/28 Robert Haas <robertmhaas@gmail.com>:
>> On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain
>> <cedric.villemain.debian@gmail.com> wrote:
>>> In the first query, the planner doesn't use the information about the values 2, 3, 4.
>>> It just says: I'll bet I'll have 2 rows in t1 (I think it should
>>> say 3, but it doesn't).
>>> So it divides the estimated number of rows in the t2 table by 5
>>> (distinct values) and multiplies by 2 (rows): 40040.
>>
>> I think it's doing something more complicated.  See scalararraysel().
>>
>>> In the second query the planner uses a different behavior: it
>>> expands the value of t1.t to t2.t for each join relation and finds a
>>> cheaper plan (than the one using a seqscan on t2).
>>
>> I think the problem here is one we've discussed before: if the query
>> planner knows that something is true of x (like, say, x =
>> ANY('{2,3,4}')) and it also knows that x = y, it doesn't infer that
>> the same thing holds of y (i.e. y = ANY('{2,3,4}')) unless the thing
>> that is known to be true of x is that x is equal to some constant.
>> Tom doesn't think it would be worth the additional CPU time that it
>> would take to make these sorts of deductions.  I'm not sure I believe
>> that, but I haven't tried to write the code, either.
>
> Related to this too:
> http://archives.postgresql.org/pgsql-general/2010-05/msg00009.php  ?

No, sorry, I misread; that was about prepared statements in the other thread...

>
>>
>> ...Robert
>>
>
>
>
> --
> Cédric Villemain
>



--
Cédric Villemain