> -----Original Message-----
> From: Robert Haas [mailto:robertmhaas@gmail.com]
> Sent: 04 November 2011 5:07 PM
> To: Sorbara, Giorgio (CIOK)
> Cc: Tomas Vondra; pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Strange query plan
>
> On Mon, Oct 31, 2011 at 9:52 AM, Sorbara, Giorgio (CIOK)
> <Giorgio.Sorbara@fao.org> wrote:
> > Group  (cost=0.00..4674965.80 rows=200 width=17) (actual time=13.375..550943.592 rows=1 loops=1)
> >   ->  Append  (cost=0.00..4360975.94 rows=125595945 width=17) (actual time=13.373..524324.817 rows=125595932 loops=1)
> >         ->  Index Scan using f_suipy_pkey on f_suipy  (cost=0.00..5.64 rows=1 width=58) (actual time=0.019..0.019 rows=0 loops=1)
> >               Index Cond: ((fk_theme)::text = 'main_py_six_scxc'::text)
> >         ->  Seq Scan on f_suipy_main_py_six_scxc f_suipy  (cost=0.00..4360970.30 rows=125595944 width=17) (actual time=13.352..495259.117 rows=125595932 loops=1)
> >               Filter: ((fk_theme)::text = 'main_py_six_scxc'::text)
> > Total runtime: 550943.699 ms
>
> How fast do you expect this to run? It's aggregating 125 million
> rows, so that's going to take some time no matter how you slice it.
> Unless I'm misreading this, it's actually taking only about 4
> microseconds per row, which does not obviously suck.
Well, the problem is not how long it takes to process one row, but rather the best query plan I can get. I don't mean the planner is wrong; I was expecting a feature that isn't there (yet): we don't have pure index scans. Fair enough. So I have approached the problem in a different way: getting rid of the degenerate dimension and exploiting a "useless" dimension table.
It's a workaround, but it actually seems to work :) Now I have a ~350 million row fact table and no partitions, and I am happy to get the data I want in 1 second or less.
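For anyone curious, the reworked schema looks roughly like this (all table and column names here are illustrative, not the actual ones): instead of repeating the theme code as a text column in every fact row, the code moves into a small dimension table and the fact table carries only a compact surrogate key, so the filter becomes a join against a handful of dimension rows:

```sql
-- Hypothetical names for illustration; the real schema differs.
CREATE TABLE d_theme (
    theme_id  smallint PRIMARY KEY,
    fk_theme  text NOT NULL UNIQUE   -- e.g. 'main_py_six_scxc'
);

CREATE TABLE f_suipy (
    theme_id  smallint NOT NULL REFERENCES d_theme (theme_id),
    value     numeric
    -- ... other measures ...
);

CREATE INDEX f_suipy_theme_idx ON f_suipy (theme_id);

-- The former Filter on (fk_theme)::text becomes a join:
SELECT sum(f.value)
FROM f_suipy f
JOIN d_theme d ON d.theme_id = f.theme_id
WHERE d.fk_theme = 'main_py_six_scxc';
```

With only a few distinct themes, the planner can look up the matching theme_id in the tiny dimension table and use the narrow smallint index on the fact table, rather than scanning and filtering the wide text column across every row.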
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company