Thread: postgresql performance degradation over time....
Hi folks,

this is my first post to this mailing list. We are using postgres-7.4 in a server-based application which requires frequent updates and inserts to the database. We have observed a substantial fall in the performance of the database server over time. It works fine for some initial days and then its performance falls, and goes on falling as the days pass by (there is not much increase in the size of the database).

I have heard that updates and deletes in any DB bring in some sort of fragmentation in the database, and we are using the vacuum feature of postgres to defragment. We have also tweaked the performance parameters in postgresql.conf, but it's not helping.

So I was wondering, how can we maintain the performance of postgres over time? Will taking a dump of the database, then deleting all tables and restoring it, help? I am just clueless; any help/pointers to some useful link would be appreciated.

thanks in advance,
-sunil
On Fri, Aug 26, 2005 at 22:13:04 +0530, sunil arora <arora.sunil@gmail.com> wrote:
> I have heard that updates and deletes in any DB bring in some sort of
> fragmentation in the database, and we are using the vacuum feature of
> postgres to defragment. We have also tweaked the performance
> parameters in postgresql.conf, but it's not helping.

Are you vacuuming the database?

If you haven't been, you will probably need to do a VACUUM FULL now to get things down to a reasonable size. You should have regularly scheduled vacuum runs to allow for reuse of deleted tuples. In 8.0 there is a contrib package (pg_autovacuum) that does automated vacuum scheduling. In the upcoming 8.1 release (now in beta) that feature is part of the core distribution.

If you haven't already, you should read through the server administration part of the documentation.
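The regularly scheduled vacuum runs suggested above might look something like this in cron. This is only a sketch: the database name "mydb", the paths, and the schedule are all placeholders to adapt to your own setup.

```shell
# Illustrative crontab entries for a 7.4 server; "mydb", the paths,
# and the times are placeholders.

# Plain VACUUM ANALYZE nightly at 03:00 to make dead tuples reusable
# and refresh planner statistics (no exclusive locks taken):
0 3 * * * postgres /usr/bin/vacuumdb --analyze mydb

# A more aggressive VACUUM FULL once a week; it takes exclusive
# locks, so schedule it in a quiet window:
0 4 * * 0 postgres /usr/bin/vacuumdb --full --analyze mydb
```

The key design point is that plain VACUUM must run often enough that the free space map can keep track of dead tuples between runs; VACUUM FULL is the heavyweight fallback, not the routine tool.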
Bruno,
thanks for the reply. We did run vacuum on it, and we do it regularly to maintain its performance, but it's not giving the expected results. I don't know why, but if we delete the entire database and restore it from the dump, then things seem to improve a _LOT_. Isn't vacuum supposed to do the same task for us? What could be going on, any idea?

tx in advance,
-sunil

On 8/26/05, Bruno Wolff III <bruno@wolff.to> wrote:
> Are you vacuuming the database?
>
> If you haven't been, you will probably need to do a vacuum full now to get
> things down to a reasonable size. You should have regularly scheduled
> vacuum runs to allow for reuse of deleted tuples. [snip]
On Sat, Aug 27, 2005 at 18:19:54 +0530, sunil arora <arora.sunil@gmail.com> wrote:
> we did run vacuum on it, and we do it regularly to maintain its
> performance, but it's not giving the expected results.

Did you do VACUUM FULL or just plain VACUUM?

> I don't know why, but if we delete the entire database and restore it
> from the dump, then things seem to improve a _LOT_.
> Isn't vacuum supposed to do the same task for us?

It sounds like you have a lot of dead tuples or index bloat. I think 7.4 had the main index bloat issue fixed, but it was still possible to get bloated indexes in some circumstances, so it might be worth trying to reindex the tables.

Note that plain VACUUM only does the job it is supposed to if your FSM setting is large enough to track all of the dead tuples in a table. It also doesn't move valid tuples around to allow the underlying files to be reduced to the minimum size needed. If things have gotten bad enough, you want to do a VACUUM FULL. (CLUSTER can be a faster way to do this, but for only a couple of gigabytes of data, it may not be worth the trouble.)
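The reindex and FSM checks described above could be tried as follows; "mytable" is a placeholder, and the numbers shown in the config fragment are illustrative, not recommendations:

```sql
-- Rebuild possibly bloated indexes (takes an exclusive lock on 7.4):
REINDEX TABLE mytable;

-- Reclaim space aggressively; this also locks the table exclusively,
-- so run it in a maintenance window:
VACUUM FULL ANALYZE mytable;

-- A database-wide VACUUM VERBOSE reports, at the end, how many page
-- slots the free space map needed versus what it can track:
VACUUM VERBOSE;
```

If that report shows more pages with reclaimable space than the FSM can hold, plain VACUUM silently loses track of the excess and the tables keep growing, which would explain why only a dump-and-restore helps. Raising the limits in postgresql.conf (server restart required on 7.4) lets routine vacuuming keep up:

```
# postgresql.conf (7.4) -- illustrative values only:
max_fsm_pages = 200000
max_fsm_relations = 1000
```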
Bruno Wolff III wrote:
> It sounds like you have a lot of dead tuples or index bloat. [snip]
> If things have gotten bad enough you want to do a VACUUM FULL.

Also don't forget to run ANALYZE after your vacuum, or simply VACUUM [FULL] ANALYZE;
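For completeness, the statistics refresh mentioned above can also be run on its own, which is useful because it is much cheaper than any form of vacuum ("mytable" is again a placeholder):

```sql
-- ANALYZE only samples the table to refresh planner statistics;
-- it takes no heavy locks and is cheap enough to run frequently:
ANALYZE mytable;

-- Or fold it into the routine vacuum pass, as suggested above:
VACUUM ANALYZE mytable;
```

Without fresh statistics the planner keeps using stale row-count estimates, so queries can stay slow even after dead space has been reclaimed.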