On Mon, Nov 11, 2013 at 1:57 PM, Jim Nasby <jnasby@enova.com> wrote:
explain (analyze, buffers) select min(insert_time) from cnu_stats.page_hits_raw;
                                                                         QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
 Result  (cost=0.12..0.13 rows=1 width=0) (actual time=119.347..119.347 rows=1 loops=1)
   Buffers: shared hit=1 read=9476
   InitPlan 1 (returns $0)
     ->  Limit  (cost=0.00..0.12 rows=1 width=8) (actual time=119.335..119.336 rows=1 loops=1)
           Buffers: shared hit=1 read=9476
           ->  Index Scan using page_hits_raw_pkey on page_hits_raw  (cost=0.00..5445004.65 rows=47165480 width=8) (actual time=119.333..119.333 rows=1 loops=1)
                 Index Cond: (insert_time IS NOT NULL)
                 Buffers: shared hit=1 read=9476
 Total runtime: 119.382 ms
(9 rows)
We do run a regular process to remove older rows... I thought we were vacuuming after that process, but maybe not.
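A minimal sketch of what such a purge-plus-vacuum step might look like (the table name comes from the plan above; the retention window and the choice to run VACUUM ANALYZE explicitly are assumptions for illustration):

```sql
-- Hypothetical purge step: the 90-day window is illustrative only.
DELETE FROM cnu_stats.page_hits_raw
WHERE insert_time < now() - interval '90 days';

-- Without an explicit VACUUM (or a timely autovacuum run), dead index
-- entries linger, and the min() index scan must step over them.
VACUUM ANALYZE cnu_stats.page_hits_raw;
```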
Btree indexes have special code that kills index tuples once the corresponding table tuple is dead to all transactions, so only the first such query after the mass deletion becomes vacuum-eligible should be slow, even if no vacuum is run. But if long-running transactions prevent the dead rows from going out of scope, nothing can be done until those transactions go away.
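One way to check for the long-running transactions described above is to query pg_stat_activity; a sketch (column names are the standard ones in recent releases, adjust for your version):

```sql
-- Find transactions that may be holding back the dead-to-all horizon.
SELECT pid,
       state,
       xact_start,
       now() - xact_start AS xact_age,
       query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 10;
```

Anything with a large xact_age (especially "idle in transaction" sessions) keeps the dead rows visible and prevents both the kill-tuple optimization and vacuum from reclaiming them.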