Thread: Question about (probably wrong) index scan cost for conditional indexes

Question about (probably wrong) index scan cost for conditional indexes

From: Maxim Boguk
Date:
I'm not sure whether this is a bug or just the planner working as designed.
PostgreSQL 9.1.2 on Linux.

But it seems that the index scan cost for very narrow/selective conditional indexes is greatly overestimated, at least in some cases.

In my case I have a special conditional (partial) index like:

"news_dtime_in208section_active_key2" btree (dtime) WHERE status = 1 AND class::text = 'Sports::News'::text AND sections && '{208}'::integer[]

And the query:

db=# EXPLAIN ANALYZE select * from news  where  (status = 1)  and  (class = 'Sports::News')  and  (sections && '{208}')  order by dtime  limit 10;
                                                                       QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..26.38 rows=10 width=1262) (actual time=0.040..0.082 rows=10 loops=1)
   ->  Index Scan using news_dtime_in208section_active_key2 on news  (cost=0.00..1429.55 rows=542 width=1262) (actual time=0.038..0.073 rows=10 loops=1)
 Total runtime: 0.142 ms
(3 rows)

I see no reason why the cost of that query is so high... I think it should be very close to the cost of this query:

    "news_pkey" PRIMARY KEY, btree (id)

db=# EXPLAIN ANALYZE select * from news  order by id limit 10;
                                                            QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..0.33 rows=10 width=1262) (actual time=0.043..0.085 rows=10 loops=1)
   ->  Index Scan using news_pkey on news  (cost=0.00..25944.34 rows=775090 width=1262) (actual time=0.041..0.077 rows=10 loops=1)
 Total runtime: 0.147 ms
(3 rows)

(And if you compare the real execution times, they are the same, but the costs differ by two orders of magnitude.)
No change to the costing settings has any effect on that difference.


That problem leads to switching to a very slow plan for medium limits:

db=# EXPLAIN ANALYZE select * from news  where  (status = 1)  and  (class = 'Sports::News')  and  (sections && '{208}')  order by dtime  limit 40;
                                                                       QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=91.97..92.07 rows=40 width=1262) (actual time=630.865..630.889 rows=40 loops=1)
   ->  Sort  (cost=91.97..93.32 rows=542 width=1262) (actual time=630.862..630.872 rows=40 loops=1)
         Sort Key: dtime
         Sort Method: top-N heapsort  Memory: 89kB
         ->  Bitmap Heap Scan on news  (cost=6.18..74.83 rows=542 width=1262) (actual time=145.816..412.254 rows=262432 loops=1)
               Recheck Cond: ((sections && '{208}'::integer[]) AND (status = 1) AND ((class)::text = 'Sports::News'::text))
               ->  Bitmap Index Scan on news_sections_gin2_special  (cost=0.00..6.05 rows=542 width=0) (actual time=98.954..98.954 rows=262754 loops=1)
                     Index Cond: (sections && '{208}'::integer[])
 Total runtime: 632.049 ms
(9 rows)

Kind regards,
Maksym

--
Maxim Boguk
Senior Postgresql DBA.

Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678

Skype: maxim.boguk
Jabber: maxim.boguk@gmail.com

LinkedIn profile: http://nz.linkedin.com/in/maximboguk
"If they can send one man to the moon... why can't they send them all?"

МойКруг: http://mboguk.moikrug.ru/
"People problems are solved with people.
If people cannot solve the problem, try technology.
People will then wish they'd listened at the first stage."


Re: Question about (probably wrong) index scan cost for conditional indexes

From: Tom Lane
Date:
Maxim Boguk <maxim.boguk@gmail.com> writes:
> But it seems that the index scan cost for very narrow/selective conditional
> indexes is greatly overestimated, at least in some cases.

I realized in connection with
http://archives.postgresql.org/pgsql-general/2012-01/msg00459.php
that btcostestimate is not correctly estimating numIndexTuples for
partial indexes.  But it's impossible to tell from this amount of
information whether you're seeing an effect of that, or something else.
Can you provide a self-contained test case?

            regards, tom lane

Re: Question about (probably wrong) index scan cost for conditional indexes

From: Maxim Boguk
Date:

On Mon, Jan 23, 2012 at 11:28 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Maxim Boguk <maxim.boguk@gmail.com> writes:
> But it seems that the index scan cost for very narrow/selective conditional
> indexes is greatly overestimated, at least in some cases.

I realized in connection with
http://archives.postgresql.org/pgsql-general/2012-01/msg00459.php
that btcostestimate is not correctly estimating numIndexTuples for
partial indexes.  But it's impossible to tell from this amount of
information whether you're seeing an effect of that, or something else.
Can you provide a self-contained test case?

                       regards, tom lane

Probably the simplest test case:

set random_page_cost to 4;
set seq_page_cost to 1;
drop table  if exists test;
CREATE TABLE test (id integer primary key, value1 float, value2 float, value3 float, value4 float);
INSERT into test select id,random() as value1,random() as value2, random() as value3,random() as value4 from generate_series(1,1000000) as g(id);
CREATE INDEX test_special_key on test(value1) where value2*2<0.01 and value3*2<0.01 and value4*2<0.01;
ANALYZE test;

postgres=# EXPLAIN ANALYZE select * from test order by id limit 100;
                                                            QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..3.43 rows=100 width=36) (actual time=0.042..0.170 rows=100 loops=1)
   ->  Index Scan using test_pkey on test  (cost=0.00..34317.36 rows=1000000 width=36) (actual time=0.040..0.108 rows=100 loops=1)
 Total runtime: 0.243 ms
(3 rows)

vs

postgres=# EXPLAIN ANALYZE select * from test where value2*2<0.01 and value3*2<0.01 and value4*2<0.01 order by value1 limit 100;
                                                              QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..92.52 rows=100 width=36) (actual time=0.072..0.072 rows=0 loops=1)
   ->  Index Scan using test_special_key on test  (cost=0.00..34264.97 rows=37037 width=36) (actual time=0.070..0.070 rows=0 loops=1)
 Total runtime: 0.113 ms
(3 rows)

cost difference:
(cost=0.00..3.43 rows=100 width=36)
vs
(cost=0.00..92.52 rows=100 width=36)

The actual speed (and theoretical performance) is almost the same.

The more selective the conditions added to the conditional index, the worse the situation with the wrong costing.
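
(For what it's worth, a sketch: the misestimate can be checked directly by counting the rows that actually satisfy the partial-index predicate; the plan above estimated 37037 rows while the index scan returned 0.)

-- Sketch: actual number of rows satisfying the partial-index predicate,
-- versus the planner's rows=37037 estimate in the plan above.
SELECT count(*) FROM test
WHERE value2*2 < 0.01 AND value3*2 < 0.01 AND value4*2 < 0.01;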

Kind Regards,
Maksym



Re: Question about (probably wrong) index scan cost for conditional indexes

From: Maxim Boguk
Date:
Hi.

It seems the previous test case did not clearly demonstrate the problem I am stuck with.

Here is a much better test case, closer to reality:

Preparation:
set random_page_cost to 4;
set seq_page_cost to 1;

create table test (id integer primary key, sections integer[], value float);
insert into test select id, ('{'||((random()*10)::integer)||'}')::integer[] as sections, random() as value from generate_series(1,1000000) as g(id);
--generic gist index for array
CREATE INDEX test_sections_gist on test using gist(sections);
--specialized index on value for sections && '{2}'
CREATE INDEX test_value_in2section_key on test(value) where sections && '{2}';
analyze test;

Now the actual tests:

A good plan, but the cost is definitely wrong:
postgres=# EXPLAIN ANALYZE SELECT * from test where sections && '{2}' order by value limit 100;
                                                                  QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..539.29 rows=100 width=37) (actual time=0.043..0.499 rows=100 loops=1)
   ->  Index Scan using test_value_in2section_key on test  (cost=0.00..5392.87 rows=1000 width=37) (actual time=0.040..0.434 rows=100 loops=1)
 Total runtime: 0.570 ms

Compare with an almost equivalent query:
postgres=#  EXPLAIN ANALYZE SELECT * from test order by id limit 100;
                                                            QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..3.43 rows=100 width=37) (actual time=0.057..0.192 rows=100 loops=1)
   ->  Index Scan using test_pkey on test  (cost=0.00..34317.36 rows=1000000 width=37) (actual time=0.054..0.115 rows=100 loops=1)
 Total runtime: 0.258 ms

The actual speed is almost the same, but the cost differs by a factor of 100.
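
(As a quick sanity check, a sketch: counting the matching rows directly shows on the order of 100000 matches out of 1000000, as the actual row counts in the bitmap plans below confirm, while the plans assume only 1000.)

-- Sketch: actual match count for the partial-index predicate, versus the
-- planner's rows=1000 estimate seen in the plans above and below.
SELECT count(*) FROM test WHERE sections && '{2}';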



Now if I increase the limit, I start getting slow plans because the planner switches to the GiST index and a bitmap scan (since the cost of the plain index scan is too high):

postgres=# EXPLAIN ANALYZE SELECT * from test where sections && '{2}' order by value limit 1000;
                                                                   QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=2941.68..2944.18 rows=1000 width=37) (actual time=175.301..175.766 rows=1000 loops=1)
   ->  Sort  (cost=2941.68..2944.18 rows=1000 width=37) (actual time=175.298..175.541 rows=1000 loops=1)
         Sort Key: value
         Sort Method: top-N heapsort  Memory: 127kB
         ->  Bitmap Heap Scan on test  (cost=56.48..2891.85 rows=1000 width=37) (actual time=80.230..132.479 rows=99641 loops=1)
               Recheck Cond: (sections && '{2}'::integer[])
               ->  Bitmap Index Scan on test_sections_gist  (cost=0.00..56.23 rows=1000 width=0) (actual time=78.112..78.112 rows=99641 loops=1)
                     Index Cond: (sections && '{2}'::integer[])
 Total runtime: 175.960 ms
(9 rows)

Even if I drop the GiST index, I'm still getting the wrong plan:
postgres=# drop index test_sections_gist;
DROP INDEX
postgres=# EXPLAIN ANALYZE SELECT * from test where sections && '{2}' order by value limit 1000;
                                                                        QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=4489.88..4492.38 rows=1000 width=37) (actual time=116.637..117.088 rows=1000 loops=1)
   ->  Sort  (cost=4489.88..4492.38 rows=1000 width=37) (actual time=116.635..116.857 rows=1000 loops=1)
         Sort Key: value
         Sort Method: top-N heapsort  Memory: 127kB
         ->  Bitmap Heap Scan on test  (cost=1604.68..4440.05 rows=1000 width=37) (actual time=22.175..74.556 rows=99641 loops=1)
               Recheck Cond: (sections && '{2}'::integer[])
               ->  Bitmap Index Scan on test_value_in2section_key  (cost=0.00..1604.43 rows=1000 width=0) (actual time=20.248..20.248 rows=99641 loops=1)
 Total runtime: 117.261 ms


And only if I completely disable bitmap scans do I get the good, fast plan (but with an exceptionally high cost):

postgres=# set enable_bitmapscan to 0;
SET
postgres=# EXPLAIN ANALYZE SELECT * from test where sections && '{2}' order by value limit 1000;
                                                                   QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..5392.87 rows=1000 width=37) (actual time=0.047..4.123 rows=1000 loops=1)
   ->  Index Scan using test_value_in2section_key on test  (cost=0.00..5392.87 rows=1000 width=37) (actual time=0.044..3.552 rows=1000 loops=1)
 Total runtime: 4.460 ms


I hope this test case makes my issue clearer.

Regards,
Maksym



Re: Question about (probably wrong) index scan cost for conditional indexes

From: Tom Lane
Date:
Maxim Boguk <maxim.boguk@gmail.com> writes:
> It seems the previous test case did not clearly demonstrate the problem
> I am stuck with.
> Here is a much better test case, closer to reality:

AFAICT, these behaviors all boil down to the fact that contrib/intarray
doesn't provide a real cost estimator for its && operator.  It's using
the "contsel" stub function, which provides a fixed selectivity of
0.001.  In your test case, with 1000000 rows in the table, the estimate
for the number of rows satisfying "sections && '{2}'" thus comes out to
exactly 1000.  Unfortunately, the true number is around 100000, and it's
that discrepancy that is leading to all of these bad cost estimates.
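
One way to confirm which estimator is in use is to look at pg_operator (an illustrative sketch; it lists every && operator, including the intarray one):

-- Sketch: restriction estimators attached to the && operators.  An operator
-- whose oprrest is the generic "contsel" stub gets a flat selectivity of 0.001.
SELECT oprname,
       oprleft::regtype  AS left_type,
       oprright::regtype AS right_type,
       oprrest           AS restriction_estimator
FROM pg_operator
WHERE oprname = '&&';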

What I'd like to see done about this is for somebody to adapt the
work Jan Urbanski did on tsvector stats collection and estimation
so that it works for the anyarray operators.  It's a bit too late
to imagine that that'll get done for 9.2, but maybe for 9.3.

            regards, tom lane

Re: Question about (probably wrong) index scan cost for conditional indexes

From: Maxim Boguk
Date:
On Mon, Jan 30, 2012 at 12:02 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Maxim Boguk <maxim.boguk@gmail.com> writes:
>> It seems the previous test case did not clearly demonstrate the problem
>> I am stuck with.
>> Here is a much better test case, closer to reality:
>
> AFAICT, these behaviors all boil down to the fact that contrib/intarray
> doesn't provide a real cost estimator for its && operator.  It's using
> the "contsel" stub function, which provides a fixed selectivity of
> 0.001.  In your test case, with 1000000 rows in the table, the estimate
> for the number of rows satisfying "sections && '{2}'" thus comes out to
> exactly 1000.  Unfortunately, the true number is around 100000, and it's
> that discrepancy that is leading to all of these bad cost estimates.
>
> What I'd like to see done about this is for somebody to adapt the
> work Jan Urbanski did on tsvector stats collection and estimation
> so that it works for the anyarray operators.  It's a bit too late
> to imagine that that'll get done for 9.2, but maybe for 9.3.
>
>                        regards, tom lane

Hi,

Thank you very much for the answer.
I know there is an issue with statistics over intarrays (it has been there
a very long time and sometimes complicates things a lot).

However,  the 100x cost difference between:
SELECT * from test order by id limit 100;  (over "primary key (id)" btree index)
Limit  (cost=0.00..3.43 rows=100 width=37)
vs
SELECT * from test where sections && '{2}' order by value limit 100;
(over "test_value_in2section_key on test(value) where sections &&
'{2}'"   btree index)
Limit  (cost=0.00..539.29 rows=100 width=37)
seems wrong to me.

Both queries perform exactly the same task: fetch 100 entries from
the table based on an ideally suited index (no post-processing/filtering
is done at all... just return 100 sorted tuples based on a single
index scan).

I don't understand where the 2+ orders of magnitude cost difference comes from.

And even if I drop the intarray index completely, I still get a
wrong plan (bitmap scan + sort), because the planner's cost for the index
scan over the conditional index is 100 times more than it should be.
(I.e. there is still an issue even in the absence of the intarray index.)

Is the absence of frequency statistics over intarrays somehow linked to
the wrong planner cost estimates for conditional index scans?

Kind Regards,
Maxim Boguk

Re: Question about (probably wrong) index scan cost for conditional indexes

From: Tom Lane
Date:
Maxim Boguk <maxim.boguk@gmail.com> writes:
> I know there is an issue with statistics over intarrays (it has been there
> a very long time and sometimes complicates things a lot).

> However,  the 100x cost difference between:
> SELECT * from test order by id limit 100;  (over "primary key (id)" btree index)
> Limit  (cost=0.00..3.43 rows=100 width=37)
> vs
> SELECT * from test where sections && '{2}' order by value limit 100;
> (over "test_value_in2section_key on test(value) where sections &&
> '{2}'"   btree index)
> Limit  (cost=0.00..539.29 rows=100 width=37)
> seems wrong to me.

[ shrug... ]  It really is the fault of the bad rowcount estimate.
The cost estimates for the complete indexscans are reasonable.  However,
in one case you've got LIMIT thinking that it will fetch the first 100
out of 1000000 index entries, so it divides the indexscan cost estimate
by 10000, and gets something reasonably accurate.  In the other case,
LIMIT thinks it's going to fetch the first 100 out of 1000 index
entries, so it divides the indexscan cost estimate by 10, and comes out
with something not very accurate.  If the rowcount estimate for && had
been correct, those numbers would be much more alike.
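
Spelled out with the numbers from your test case (a back-of-the-envelope sketch, assuming the complete-indexscan estimates stay as shown):

    test_pkey:                 34317.36 total cost, 1000000 estimated rows
        LIMIT 100  =>  34317.36 * (100 / 1000000)  =   3.43
    test_value_in2section_key:  5392.87 total cost,    1000 estimated rows
        LIMIT 100  =>   5392.87 * (100 / 1000)     = 539.29
    with a correct estimate of ~100000 matching rows, the second figure would
        be roughly  5392.87 * (100 / 100000)       =   5.39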

> Both queries perform exactly the same task: fetch 100 entries from
> the table based on an ideally suited index (no post-processing/filtering
> is done at all... just return 100 sorted tuples based on a single
> index scan).

Well, you know that and I know that, but the exposed cost and rowcount
estimates for the IndexScan plan node imply something else entirely:
that the cost-per-tuple-fetched is a lot higher in the one case than
the other.  The LIMIT estimator has no reason, or method, to second
guess that.

> And even if I drop the intarray index completely, I still get a
> wrong plan (bitmap scan + sort), because the planner's cost for the index
> scan over the conditional index is 100 times more than it should be.
> (I.e. there is still an issue even in the absence of the intarray index.)

Yeah, because it's not about the index, it's about the selectivity of
the && operator.  That estimate is wrong regardless of whether there
are any indexes involved.
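
For example (a sketch, using your test table): the flat guess shows up in any plan shape, with or without indexes:

-- Sketch: whatever scan the planner picks here, the row estimate for the
-- "sections && '{2}'" qual comes out near 1000 (the 0.001 default), while the
-- actual number of matching rows is around 100000.
EXPLAIN ANALYZE SELECT count(*) FROM test WHERE sections && '{2}';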

> Is the absence of frequency statistics over intarrays somehow linked to
> the wrong planner cost estimates for conditional index scans?

Well, we lack both the statistics and an operator selectivity function
that would know what to do with them.  Just a small matter of
programming ...

            regards, tom lane

Re: Question about (probably wrong) index scan cost for conditional indexes

From: Alex Lai
Date:
Tom,

I had the same situation in one of my queries.
Using a subquery sped it up by almost 100 times.

explain analyse
select FileId as "FileId", ESDT as "ESDT", 1 as "Position"
from V_FileMeta_L3
where Archiveset = 61000 and ESDT = 'ESDT123'
  and Source = 'SOURCE1234'
  and (
       (StartTime between '2012-01-28 05:59:57.000000Z'::timestamp - '-135 minutes'::interval
                      and '2012-01-28 07:41:27.000000Z'::timestamp + '100 days'::interval)
       or
       (EndTime between '2012-01-28 05:59:57.000000Z'::timestamp - '-135 minutes'::interval
                    and '2012-01-28 07:41:27.000000Z'::timestamp + '100 days'::interval)
      )
order by starttime
limit 1;

 Limit  (cost=0.00..15.20 rows=1 width=22) (actual time=200.048..200.048 rows=1 loops=1)
   ->  Nested Loop  (cost=0.00..117596.32 rows=7736 width=22) (actual time=200.046..200.046 rows=1 loops=1)
         ->  Index Scan using ak_filemeta_l3_esdt_starttime_endtime on filemeta_l3 b  (cost=0.00..77200.55 rows=7736 width=22) (actual time=199.986..199.989 rows=2 loops=1)
               Index Cond: ((esdt)::text = 'ROLPT'::text)
               Filter: (((source)::text = 'OMPS-NPP'::text) AND (((starttime >= '2012-01-28 08:14:57'::timestamp without time zone) AND (starttime <= '2012-05-07 07:41:27'::timestamp without time zone)) OR ((endtime >= '2012-01-28 08:14:57'::timestamp without time zone) AND (endtime <= '2012-05-07 07:41:27'::timestamp without time zone))))
         ->  Index Scan using pk_filemeta_archiveset on filemeta_archiveset a  (cost=0.00..5.21 rows=1 width=4) (actual time=0.025..0.025 rows=0 loops=2)
               Index Cond: ((a.fileid = b.fileid) AND (a.archiveset = 61000))
 Total runtime: 200.102 ms
(8 rows)

explain analyse
select FileId as "FileId", ESDT as "ESDT", 1 as "Position"
from V_FileMeta_L3
where FileId in (select fileid from V_FileMeta_L3
                 where Archiveset = 61000 and ESDT = 'ESDT123'
                   and Source = 'SOURCE1234'
                   and (
                        (StartTime between '2012-01-28 05:59:57.000000Z'::timestamp - '-135 minutes'::interval
                                       and '2012-01-28 07:41:27.000000Z'::timestamp + '100 days'::interval)
                        or
                        (EndTime between '2012-01-28 05:59:57.000000Z'::timestamp - '-135 minutes'::interval
                                     and '2012-01-28 07:41:27.000000Z'::timestamp + '100 days'::interval)
                       )
                 order by starttime)
limit 1;

 Limit  (cost=53038.95..53039.41 rows=1 width=14) (actual time=2.502..2.502 rows=1 loops=1)
   ->  Nested Loop  (cost=53038.95..6661196.32 rows=14611464 width=14) (actual time=2.480..2.480 rows=1 loops=1)
         ->  Nested Loop  (cost=53038.95..53716.80 rows=2549631 width=18) (actual time=2.464..2.464 rows=1 loops=1)
               ->  HashAggregate  (cost=53038.95..53040.95 rows=200 width=4) (actual time=2.445..2.445 rows=1 loops=1)
                     ->  Sort  (cost=52923.05..52942.37 rows=7727 width=12) (actual time=2.346..2.372 rows=43 loops=1)
                           Sort Key: b.starttime
                           Sort Method:  quicksort  Memory: 27kB
                           ->  Nested Loop  (cost=209.87..52424.05 rows=7727 width=12) (actual time=0.358..2.251 rows=43 loops=1)
                                 ->  Bitmap Heap Scan on filemeta_l3 b  (cost=209.87..12077.66 rows=7727 width=12) (actual time=0.262..0.685 rows=108 loops=1)
                                       Recheck Cond: ((((esdt)::text = 'ROLPT'::text) AND (starttime >= '2012-01-28 08:14:57'::timestamp without time zone) AND (starttime <= '2012-05-07 07:41:27'::timestamp without time zone)) OR (((esdt)::text = 'ROLPT'::text) AND (endtime >= '2012-01-28 08:14:57'::timestamp without time zone) AND (endtime <= '2012-05-07 07:41:27'::timestamp without time zone)))
                                       Filter: ((source)::text = 'OMPS-NPP'::text)
                                       ->  BitmapOr  (cost=209.87..209.87 rows=9895 width=0) (actual time=0.223..0.223 rows=0 loops=1)
                                             ->  Bitmap Index Scan on ak_filemeta_l3_esdt_starttime_endtime  (cost=0.00..107.72 rows=4961 width=0) (actual time=0.127..0.127 rows=106 loops=1)
                                                   Index Cond: (((esdt)::text = 'ROLPT'::text) AND (starttime >= '2012-01-28 08:14:57'::timestamp without time zone) AND (starttime <= '2012-05-07 07:41:27'::timestamp without time zone))
                                             ->  Bitmap Index Scan on ak_filemeta_l3_esdt_endtime  (cost=0.00..98.29 rows=4934 width=0) (actual time=0.093..0.093 rows=108 loops=1)
                                                   Index Cond: (((esdt)::text = 'ROLPT'::text) AND (endtime >= '2012-01-28 08:14:57'::timestamp without time zone) AND (endtime <= '2012-05-07 07:41:27'::timestamp without time zone))
                                 ->  Index Scan using pk_filemeta_archiveset on filemeta_archiveset a  (cost=0.00..5.21 rows=1 width=4) (actual time=0.011..0.011 rows=0 loops=108)
                                       Index Cond: ((a.fileid = b.fileid) AND (a.archiveset = 61000))
               ->  Index Scan using pk_filemeta_l3 on filemeta_l3 b  (cost=0.00..3.37 rows=1 width=14) (actual time=0.015..0.015 rows=1 loops=1)
                     Index Cond: (b.fileid = a.fileid)
         ->  Index Scan using ak_filemeta_archiveset_fileid on filemeta_archiveset a  (cost=0.00..2.52 rows=6 width=4) (actual time=0.012..0.012 rows=1 loops=1)
               Index Cond: (a.fileid = b.fileid)
 Total runtime: 2.711 ms

Hope this helps.


Best Regards,
Alex Lai