Re: Slow queries on big table

From: Tom Lane
Subject: Re: Slow queries on big table
Date:
Msg-id: 1736.1179518362@sss.pgh.pa.us
In reply to: Slow queries on big table  ("Tyrrill, Ed" <tyrrill_ed@emc.com>)
List: pgsql-performance
"Tyrrill, Ed" <tyrrill_ed@emc.com> writes:
>  Index Scan using backup_location_pkey on backup_location
> (cost=0.00..1475268.53 rows=412394 width=8) (actual
> time=3318.057..1196723.915 rows=2752 loops=1)
>    Index Cond: (backup_id = 1070)
>  Total runtime: 1196725.617 ms

If we take that at face value it says the indexscan is requiring 434
msec per actual row fetched.  Which is just not very credible; the worst
case should be about 1 disk seek per row fetched.  So there's something
going on that doesn't meet the eye.
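
[For reference, the 434 msec figure is simply the plan's total actual time divided by the number of rows it actually returned:]

```sql
-- Per-row cost implied by the EXPLAIN ANALYZE numbers above:
-- 1196723.915 ms of actual time for 2752 rows returned.
SELECT 1196723.915 / 2752 AS ms_per_row;  -- about 434.86 ms per row
```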

What I'm wondering about is whether the table is heavily updated and
seldom vacuumed, leading to lots and lots of dead tuples being fetched
and then rejected (hence they'd not show in the actual-rows count).
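
[One way to check that theory is to vacuum the table and see how many dead row versions PostgreSQL reports. A sketch, using the table name from the plan above; the exact VERBOSE output varies by version:]

```sql
-- VACUUM VERBOSE reports how many dead row versions were found
-- and removed in each table (and index) it processes.
VACUUM VERBOSE backup_location;

-- If the counts are large, re-gather statistics afterwards so the
-- planner works from up-to-date numbers:
ANALYZE backup_location;
```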

The other thing that seems pretty odd is that it's not using a bitmap
scan --- for such a large estimated rowcount I'd have expected a bitmap
scan not a plain indexscan.  What do you get from EXPLAIN ANALYZE if
you force a bitmap scan?  (Set enable_indexscan off, and enable_seqscan
too if you have to.)
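
[A sketch of that experiment; the original query is not quoted in this message, so the SELECT below is a stand-in inferred from the plan's Index Cond:]

```sql
-- These settings are session-local and only discourage the planner
-- from choosing the named scan types; they can be reset afterwards.
SET enable_indexscan = off;
SET enable_seqscan = off;   -- only if the planner falls back to a seqscan

EXPLAIN ANALYZE
SELECT *
FROM backup_location
WHERE backup_id = 1070;

RESET enable_indexscan;
RESET enable_seqscan;
```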

            regards, tom lane
