On Thu, Oct 29, 2009 at 8:44 AM, Peter Meszaros <pme@prolan.hu> wrote:
> Hi All,
>
> I use postgresql 8.3.7 as a huge queue. There is a very simple table
> with six columns and two indices, and about 6 million records are
> written into it every day, continuously committed every 10 seconds by
> 8 clients. The table stores approximately 120 million records, because a
> cron job daily deletes the ones older than 20 days. Autovacuum is
> on and all settings are the factory defaults except some unrelated ones
> (listen address, authorization). But my database is growing,
> characteristically ~600MByte/day, but sometimes much slower (eg. 10MB,
> or even 0!!!).
Sounds like you're blowing out your free space map. Things to try:
1: delete your rows in smaller batches. E.g. run the delete every hour,
removing everything older than 20 days each time, so you don't delete a
whole day's worth of rows in one shot.
2: crank up max_fsm_pages large enough to hold all the dead tuples.
3: lower the autovacuum cost delay (autovacuum_vacuum_cost_delay) so
vacuum can keep up with the delete rate.
4: get faster hard drives so that vacuum can keep up without slowing
your system to a crawl while it's running.
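For point 1, the hourly batched delete might look roughly like this; the
table name "events" and timestamp column "created_at" are made up here,
so substitute your actual schema:

```sql
-- Run hourly (e.g. from cron) instead of one big daily delete.
-- "events" and "created_at" are placeholder names for your queue table.
DELETE FROM events
WHERE created_at < now() - interval '20 days';
```

Spreading the deletes out means each run produces a manageable number of
dead tuples, which autovacuum (or an explicit VACUUM after the delete)
can reclaim before the free space map overflows.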
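For points 2 and 3, the relevant postgresql.conf settings on 8.3 look
something like the below. The values are illustrative, not tuned for
your workload; check VACUUM VERBOSE output to size max_fsm_pages, and
note that changing it requires a restart:

```
# postgresql.conf -- illustrative values, adjust for your workload
max_fsm_pages = 2000000              # must be large enough to track all
                                     # pages with free space between vacuums
autovacuum_vacuum_cost_delay = 10ms  # down from the 20ms default, so
                                     # autovacuum runs faster
```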