On Tue, 10 Feb 2004, CSN wrote:
>
> I think that probably improves things (lower cost? -
> see my other post):
>
> explain select p.* from products p where p.id in (
> select product_id from product_categories pc where
> pc.category_id = 1016) order by p.title limit 25
> offset 0;
>
> QUERY PLAN
> -----------------------------------------------------------------------------
>  Limit  (cost=4282.18..4282.24 rows=25 width=290)
>    ->  Sort  (cost=4282.18..4282.46 rows=111 width=290)
>          Sort Key: p.title
>          ->  Nested Loop  (cost=3609.75..4278.41 rows=111 width=290)
>                ->  HashAggregate  (cost=3609.75..3609.75 rows=111 width=4)
>                      ->  Index Scan using idx_pc_category_id on
>                          product_categories pc
>                          (cost=0.00..3607.28 rows=986 width=4)
>                            Index Cond: (category_id = 1016)
>                ->  Index Scan using pkey_products_id on products p
>                    (cost=0.00..6.01 rows=1 width=290)
>                      Index Cond: (p.id = "outer".product_id)
> (9 rows)
>
>
> I figured the limit/offset was probably causing the
> problem. What's weird is that when the same query is
> executed again, it seems much faster - some sort of
> caching maybe?
Yep, exactly - on the second run the data is already sitting in
PostgreSQL's shared buffers and the OS page cache, so you skip most of
the disk reads.
What does explain analyze say about the two queries?
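To see the caching effect directly, something like the following (same
query, run twice in the same session) would show it - EXPLAIN ANALYZE
actually executes the query and reports real times, so compare the
"actual time" figures between the two runs:

    explain analyze
    select p.*
      from products p
     where p.id in (select product_id
                      from product_categories pc
                     where pc.category_id = 1016)
     order by p.title
     limit 25 offset 0;

The first run's actual times include disk reads; the second should be
mostly cache hits.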
> (table pc is just product_id <=> category_id - I don't
> really need the category_id)
If you can eliminate the need for that table in this query, it should
run much faster.
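For example - and this is just a sketch, assuming each product belongs
to a single category so a category_id column could live directly on
products (a schema change, not something shown in this thread) - the
subquery and join disappear entirely:

    -- assumes a denormalized products.category_id column exists,
    -- plus an index on (category_id, title) so the sort can use it
    select p.*
      from products p
     where p.category_id = 1016
     order by p.title
     limit 25 offset 0;

With a multicolumn index on (category_id, title) the planner can
satisfy both the filter and the ORDER BY from one index scan and stop
after 25 rows, instead of hashing ~986 product_ids and sorting.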