Tuan --
>
> Thanks for your help.
> I created indexes on channel_id and data_id as you suggested.
...
<...>
>
> explain analyze select d.data_id, d.table_name, d.event_type, d.row_data, d.pk_data, d.old_data, d.create_time,
> d.trigger_hist_id, d.channel_id, d.transaction_id, d.source_node_id, d.external_data, '' from sym_data d inner join
> sym_data_gap g on g.status='GP' and d.data_id between g.start_id and g.end_id where
> d.channel_id='sale_transaction' order by d.data_id asc;
>
> Nested Loop (cost=0.00..1512979014.35 rows=26268463088 width=1401) (actual time=25741.704..7650979.311 rows=2764140 loops=1)
>   -> Index Scan using idx_d_channel_id3 on sym_data d (cost=0.00..1781979.40 rows=3117384 width=1401) (actual time=83.718..55126.002 rows=3124631 loops=1)
>   -> Index Scan using sym_data_gap_pkey on sym_data_gap g (cost=0.00..358.37 rows=8426 width=8) (actual time=2.428..2.429 rows=1 loops=3124631)
>     Index Cond: ((d.data_id >= g.start_id) AND (d.data_id <= g.end_id))
>     Filter: (g.status = 'GP'::bpchar)
> Total runtime: 7651803.073 ms
>
> But the query performance hasn't changed.
> Please help me.
Did you run ANALYZE on the tables after building the new indexes? The row estimates in the plan are wildly off.
That may be a symptom of something else entirely, but stale statistics are easy to rule out, so it is worth trying first.
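For the archives, a minimal sketch of what that suggestion looks like in practice (table names taken from the plan above; this assumes PostgreSQL and does not change the original query):

```sql
-- Refresh planner statistics for the tables involved in the query.
-- Run these after creating the new indexes; with stale statistics
-- the planner's row estimates (and thus its plan choice) can be
-- badly wrong, as in the plan quoted above.
ANALYZE sym_data;
ANALYZE sym_data_gap;

-- Then re-check the plan to see whether the estimated row counts
-- move closer to the actual counts:
EXPLAIN ANALYZE
SELECT d.data_id
FROM sym_data d
INNER JOIN sym_data_gap g
        ON g.status = 'GP'
       AND d.data_id BETWEEN g.start_id AND g.end_id
WHERE d.channel_id = 'sale_transaction'
ORDER BY d.data_id ASC;
```

If autovacuum is enabled with default settings it usually keeps statistics current, but an explicit ANALYZE immediately after bulk loads or index changes removes that variable from the diagnosis.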
HTH,
Greg Williamson