Re: DB slow down after table partition

From Thom Brown
Subject Re: DB slow down after table partition
Date
Msg-id AANLkTimkpcfRj=+n75w2qkeJnhq8EMfQZmemSSqZ1Aep@mail.gmail.com
In reply to DB slow down after table partition  (AI Rumman <rummandba@gmail.com>)
List pgsql-performance
On 10 October 2010 11:51, AI Rumman <rummandba@gmail.com> wrote:
> I already sent this mail earlier, but attached the wrong explain output,
> so I am sending it again.
>
> I have more than 1,300,000 records in the crm table, and I partitioned the
> table on the deleted = 0 key.
> It is working fine, except that after partitioning the query is taking more
> time than before.
> I already set constraint_exclusion = on;
>
> I have added the explain analyze output for both states.
> Any idea why the delay is occurring?
>
> explain analyze
>  select *
>  from crm as c
>  inner join activity as a on c.crmid = a.activityid
>  inner join seactivityrel as s on c.crmid= s.crmid
>  where c.deleted = 0;
>
>
>
>  Before partition:
>
>   QUERY PLAN
> --------------------------------------------------------------------
>  Merge Join (cost=0.00..107563.24 rows=308029 width=459) (actual
> time=13912.064..18196.713 rows=1 loops=1)
>   Merge Cond: ("outer".crmid = "inner".crmid)
>   -> Merge Join (cost=0.00..60995.18 rows=239062 width=451) (actual
> time=60.972..9698.700 rows=331563 loops=1)
>   Merge Cond: ("outer".crmid = "inner".activityid)
>   -> Index Scan using crm_pkey on crm c (cost=0.00..43559.49 rows=945968
> width=308) (actual time=52.877..6139.369 rows=949938 loops=1)
>   Filter: (deleted = 0)
>   -> Index Scan using activity_pkey on activity a (cost=0.00..11822.64
> rows=343003 width=143) (actual time=7.999..1456.232 rows=343001 loops=1)
>   -> Index Scan using seactivityrel_crmid_idx on seactivityrel s
> (cost=0.00..38518.04 rows=1748826 width=8) (actual time=0.305..6278.171
> rows=1748826 loops=1)
>  Total runtime: 18196.832 ms
>
>
>
> After partition:
>
>   QUERY PLAN
>
> --------------------------------------------------------------------
>  Hash Join (cost=115857.19..357283.03 rows=8306416340 width=548) (actual
> time=85871.145..85874.584 rows=1 loops=1)
>   Hash Cond: ("outer".crmid = "inner".activityid)
>   -> Append (cost=0.00..37825.51 rows=949942 width=329) (actual
> time=0.167..72430.097 rows=949941 loops=1)
>   -> Seq Scan on crm c (cost=0.00..13.25 rows=1 width=280) (actual
> time=0.001..0.001 rows=0 loops=1)
>   Filter: (deleted = 0)
>   -> Seq Scan on crm_active c (cost=0.00..37812.26 rows=949941 width=329)
> (actual time=0.162..70604.116 rows=949941 loops=1)
>   Filter: (deleted = 0)
>   -> Hash (cost=73058.13..73058.13 rows=1748826 width=152) (actual
> time=9603.453..9603.453 rows=1 loops=1)
>   -> Merge Join (cost=0.00..73058.13 rows=1748826 width=152) (actual
> time=7959.707..9603.101 rows=1 loops=1)
>   Merge Cond: ("outer".activityid = "inner".crmid)
>   -> Index Scan using activity_pkey on activity a (cost=0.00..11822.25
> rows=343004 width=144) (actual time=88.467..1167.556 rows=343001 loops=1)
>   -> Index Scan using seactivityrel_crmid_idx on seactivityrel s
> (cost=0.00..38518.04 rows=1748826 width=8) (actual time=0.459..6148.843
> rows=1748826 loops=1)
>  Total runtime: 85875.591 ms
> (13 rows)

If you look at your latest explain, it shows that it's appending the
results of full sequential scans of both crm and crm_active.  Is
crm_active a child table of crm?
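
For reference, a minimal inheritance-style setup looks something like
the sketch below (the second child and the names are illustrative; the
point is that each child carries a CHECK constraint on deleted, since
that is what constraint_exclusion uses to prune partitions):

CREATE TABLE crm_active  (CHECK (deleted = 0))  INHERITS (crm);
CREATE TABLE crm_deleted (CHECK (deleted <> 0)) INHERITS (crm);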

Do you no longer have the index "crm_pkey" on the parent table?  It
doesn't appear to be there anymore.  Also, if you only want results
where deleted = 0, create a partial index, such as:

CREATE INDEX idx_crm_not_deleted ON crm (deleted) WHERE deleted = 0;

This would create an index covering only the rows where deleted = 0 in
the crm table.
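
Bear in mind that with inheritance-based partitioning the children do
not inherit indexes from the parent, so if crm_active is such a child
it needs its own indexes too -- for example (index names are just
illustrative):

CREATE INDEX crm_active_crmid_idx ON crm_active (crmid);
CREATE INDEX crm_active_not_deleted_idx ON crm_active (deleted) WHERE deleted = 0;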

--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

In the pgsql-performance list, by date sent:

Previous
From: AI Rumman
Date:
Message: DB slow down after table partition
Next
From: Florian Weimer
Date:
Message: Re: large dataset with write vs read clients