I have a table with only 30-odd records. I use one field on each
record as a sort of status flag, as a means of handshaking between a
number of clients. It works fine in theory; however, over time (just
days) it gets progressively slower, as if PostgreSQL were keeping a
list of all the updates. I tried restarting postgres in case it was
some transaction thing, but that doesn't seem to help.
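
To give an idea of the workload, each client does something along
these lines, over and over (the column names here are just for
illustration, not my real schema):

    UPDATE dataprocessor_path
       SET status = 'busy'
     WHERE processor_id = 7;

i.e. lots of small updates to the same 30-odd rows, all day long.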
Here are the 'explain' results. I made the pwdelete_temppaths table
with CREATE TABLE pwdelete_temppaths AS SELECT * FROM
dataprocessor_path, and queries on that fresh copy run flat out.
I have also tried a VACUUM FULL ANALYZE and a REINDEX, with no change
in performance. Dumping to a text file and reloading does work, but
that is a bit too savage for something that has to be done frequently.
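
(For reference, the maintenance statements I ran were roughly:

    VACUUM FULL ANALYZE dataprocessor_path;
    REINDEX TABLE dataprocessor_path;

dataprocessor_path being the real table in the explain below.)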
From what I can see, it looks like PostgreSQL THINKS there are 284000
records to scan through. How can I tell it to flush out the history of
changes?
Any help gratefully received.
Peter Watling
New Zealand
transMET-MGU=# explain select * from pwdelete_temppaths;
QUERY PLAN
-----------------------------------------------------------------------
Seq Scan on pwdelete_temppaths (cost=0.00..11.40 rows=140 width=515)
(1 row)
transMET-MGU=# explain select * from dataprocessor_path;
QUERY PLAN
---------------------------------------------------------------------------
Seq Scan on dataprocessor_path (cost=0.00..6900.17 rows=284617 width=92)
(1 row)