This follows on from the previous issue I am experiencing.
In my monitoring application, I have a few tables which contain only a
few rows but are constantly pounded with updates.
These tables are growing at very high rates. For example, a table with
fewer than 4K rows, which was about 8 MB when reloaded, is now 1.4 GB! As
these tables grow, the performance of the application - which only looks
at these previously relatively small tables - is extremely slow. The
huge tables are only used to calculate some statistical values on a
nightly basis.
I took another table which just started growing and ran analyze on it.
This is the result:
INFO: analyzing "public.tblkstests"
INFO: "tblkstests": scanned 3000 of 81837 pages, containing 109 live
rows and 10512 dead rows; 109 rows in sample, 2973 estimated total rows
Total query runtime: 52702 ms.
The actual number of physical rows in this table is 3404. Row width is
361. Table size = 639 MB.
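For reference, here is the arithmetic behind those numbers (a quick sketch; I'm assuming the default 8 KB PostgreSQL page size):

```python
# Sanity check on table bloat, assuming the default 8 KB page size.
PAGE_SIZE = 8192     # bytes per page (PostgreSQL default)
pages = 81837        # pages reported by ANALYZE
live_rows = 3404     # actual physical row count
row_width = 361      # average row width in bytes

on_disk = pages * PAGE_SIZE        # actual size of the table on disk
live_data = live_rows * row_width  # size of the live rows alone

print(f"on disk:   {on_disk / 2**20:.0f} MB")    # ~639 MB
print(f"live data: {live_data / 2**20:.1f} MB")  # ~1.2 MB
print(f"bloat:     {100 * (1 - live_data / on_disk):.1f}%")  # ~99.8%
```

So roughly 1.2 MB of live data is sitting in a 639 MB table - over 99% of the pages hold nothing but dead rows.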
What can be causing this growth? Not vacuuming often enough? I have
pg_autovacuum running every 60 seconds. These tables receive 10-15
insert/update statements per second.
Any assistance or guidance will be deeply appreciated. I am pulling my
hair out on this one.