Re: Very big insert/join performance problem (bacula)

From: Devin Ben-Hur
Subject: Re: Very big insert/join performance problem (bacula)
Date:
Msg-id: 4A5E6CB5.3010300@whitepages.com
In response to: Re: Very big insert/join performance problem (bacula)  (Marc Cousin <cousinmarc@gmail.com>)
Responses: Re: Very big insert/join performance problem (bacula)  (Scott Carey <scott@richrelevance.com>)
           Re: Very big insert/join performance problem (bacula)  (Marc Cousin <cousinmarc@gmail.com>)
List: pgsql-performance
Marc Cousin wrote:
> This mail contains the asked plans :
> Plan 1
> around 1 million records to insert, seq_page_cost 1, random_page_cost 4

>          ->  Hash  (cost=425486.72..425486.72 rows=16746972 width=92) (actual time=23184.196..23184.196 rows=16732049 loops=1)
>                ->  Seq Scan on path  (cost=0.00..425486.72 rows=16746972 width=92) (actual time=0.004..7318.850 rows=16732049 loops=1)

>    ->  Hash  (cost=1436976.15..1436976.15 rows=79104615 width=35) (actual time=210831.840..210831.840 rows=79094418 loops=1)
>          ->  Seq Scan on filename  (cost=0.00..1436976.15 rows=79104615 width=35) (actual time=46.324..148887.662 rows=79094418 loops=1)

This doesn't address the question of which costs are driving the plan choice,
but it's a bit puzzling that a seq scan of 17M 92-byte rows completes in
7 secs, while a seq scan of 79M 35-byte rows takes 149 secs.  That's roughly a
4.7:1 row ratio and less than a 2:1 byte ratio, but a 20:1 time ratio.  Perhaps
there's some terrible bloat on filename that's not present on path?  If the
seq scan time on filename were proportionate to path's, this plan would
complete about two minutes faster (making it only 6 times slower instead
of 9 :).
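
A quick way to sanity-check the bloat hypothesis (just a sketch, assuming you
can query the catalogs on that server; the table names are the ones from the
plan, the column aliases are mine):

  SELECT relname,
         pg_size_pretty(pg_relation_size(oid)) AS on_disk_size,
         reltuples::bigint                     AS estimated_rows,
         (pg_relation_size(oid) / NULLIF(reltuples, 0))::bigint AS bytes_per_row
  FROM   pg_class
  WHERE  relkind = 'r'
    AND  relname IN ('path', 'filename');
  -- If bytes_per_row for filename comes out far above what its ~35-byte
  -- tuples suggest, dead space is the likely culprit; VACUUM VERBOSE or the
  -- contrib pgstattuple module would confirm it.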

--
-Devin
