Re: Any better plan for this query?..

From: Kenneth Marshall
Subject: Re: Any better plan for this query?..
Date:
Msg-id: 20090506154918.GM23405@it.is.rice.edu
In response to: Re: Any better plan for this query?..  (Dimitri <dimitrik.fr@gmail.com>)
List: pgsql-performance
On Wed, May 06, 2009 at 04:48:21PM +0200, Dimitri wrote:
> Hi Ken,
>
> yes, I may do it, but I did not expect to get into profiling initially :-)
> I expected there was just something trivial within the plan that I just
> don't know.. :-)
>
> BTW, is there already a profiler integrated into the code, or do I
> need external tools?..
>
> Rgds,
> -Dimitri

I only suggested it because it might have the effect of changing
the sequential scan on the stat table to an index scan.
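
A quick way to check that (just a sketch, using the schema and query
you posted below; enable_seqscan is a standard planner setting) is to
disable sequential scans for the session and compare the plans:

    -- steer the planner away from the seq scan, for this session only
    SET enable_seqscan = off;

    EXPLAIN ANALYZE
    select S.REF as stref, S.NAME as stnm, H.HORDER as hord,
           H.BEGIN_DATE as hbeg, H.END_DATE as hend, H.NOTE as hnote
     from HISTORY H, STAT S
     where S.REF = H.REF_STAT
     and H.REF_OBJECT = '0000000001'
     order by H.HORDER;

    -- back to the default behaviour
    RESET enable_seqscan;

If the resulting index-scan plan is not actually faster, then the seq
scan over the 1000-row stat table is probably not where the time goes.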

Cheers,
Ken
>
> On 5/6/09, Kenneth Marshall <ktm@rice.edu> wrote:
> > On Wed, May 06, 2009 at 09:38:59AM +0200, Dimitri wrote:
> >> Hi,
> >>
> >> any idea if there is a better execution plan possible for this
> >> query:
> >>
> >> select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as
> >> hbeg,
> >>         H.END_DATE as hend, H.NOTE as hnote
> >>          from HISTORY H, STAT S
> >>          where S.REF = H.REF_STAT
> >>          and H.REF_OBJECT = '0000000001'
> >>          order by H.HORDER ;
> >>
> >> EXPLAIN ANALYZE output on 8.4:
> >>                                                              QUERY PLAN
> >> ------------------------------------------------------------------------------------------------------------------------------------------------
> >>  Sort  (cost=4549.75..4555.76 rows=2404 width=176) (actual time=1.341..1.343 rows=20 loops=1)
> >>    Sort Key: h.horder
> >>    Sort Method:  quicksort  Memory: 30kB
> >>    ->  Hash Join  (cost=33.50..4414.75 rows=2404 width=176) (actual time=1.200..1.232 rows=20 loops=1)
> >>          Hash Cond: (h.ref_stat = s.ref)
> >>          ->  Index Scan using history_ref_idx on history h  (cost=0.00..4348.20 rows=2404 width=135) (actual time=0.042..0.052 rows=20 loops=1)
> >>                Index Cond: (ref_object = '0000000001'::bpchar)
> >>          ->  Hash  (cost=21.00..21.00 rows=1000 width=45) (actual time=1.147..1.147 rows=1000 loops=1)
> >>                ->  Seq Scan on stat s  (cost=0.00..21.00 rows=1000 width=45) (actual time=0.005..0.325 rows=1000 loops=1)
> >>  Total runtime: 1.442 ms
> >> (10 rows)
> >>
> >> Table HISTORY contains 200M rows, of which only 20 are needed.
> >> Table STAT contains 1000 rows, of which only 20 need to be joined to
> >> the HISTORY values.
> >>
> >> Table definitions:
> >> """""""""""""""""""""""""""""""""""""""""""""""""""
> >> create table STAT
> >> (
> >>     REF                 CHAR(3)            not null,
> >>     NAME                CHAR(40)           not null,
> >>     NUMB                INT                not null
> >> );
> >>
> >> create table HISTORY
> >> (
> >>     REF_OBJECT          CHAR(10)              not null,
> >>     HORDER              INT                   not null,
> >>     REF_STAT            CHAR(3)               not null,
> >>     BEGIN_DATE          CHAR(12)              not null,
> >>     END_DATE            CHAR(12)                      ,
> >>     NOTE                CHAR(100)
> >> );
> >>
> >> create unique index stat_ref_idx on STAT( ref );
> >> create index history_ref_idx on HISTORY( ref_object, horder );
> >> """""""""""""""""""""""""""""""""""""""""""""""""""
> >>
> >> NOTE: The same query runs 2 times faster on MySQL.
> >>
> >> Any idea?..
> >>
> >> Rgds,
> >> -Dimitri
> >>
> > Dimitri,
> >
> > Is there any chance of profiling the postgres backend to see
> > where the time is being spent?
> >
> > Just an idea,
> > Ken
> >
>
