Discussion: plain inserts and deletes very slow
Hi folks,

my application reads and writes some tables quite often (multiple times per second). These tables are quite small (not more than 20 tuples), but the operations take quite a long time (>300 ms!). The queries involve just text matching (=) and date comparison (<, >). I haven't yet been able to track down whether all these queries take such a long time or only some of them. When running them manually or trying EXPLAIN, everything's fast. Perhaps there are side effects from other concurrent queries.

Could anyone give me advice?

cu
--
---------------------------------------------------------------------
 Enrico Weigelt    ==   metux IT service
  phone:  +49 36207 519931    www:    http://www.metux.de/
  fax:    +49 36207 519932    email:  contact@metux.de
---------------------------------------------------------------------
 Realtime Forex/Stock Exchange trading powered by postgresSQL :))
                                            http://www.fxignal.net/
---------------------------------------------------------------------
On Mon, Jul 04, 2005 at 12:45:37AM +0200, Enrico Weigelt wrote:
> my application reads and writes some table quite often
> (multiple times per second). these tables are quite small
> (not more than 20 tuples), but the operations take quite a
> long time (>300 ms!).

Are you VACUUMing often enough?

/* Steinar */
--
Homepage: http://www.sesse.net/
* Enrico Weigelt <weigelt@metux.de> wrote:

forgot to mention:

+ linux-2.6.9
+ postgres-7.4.6
+ intel celeron 2ghz
+ intel ultra ata controller
+ 768mb ram

cu
* Steinar H. Gunderson <sgunderson@bigfoot.com> wrote:
> On Mon, Jul 04, 2005 at 12:45:37AM +0200, Enrico Weigelt wrote:
> > my application reads and writes some table quite often
> > (multiple times per second). these tables are quite small
> > (not more than 20 tuples), but the operations take quite a
> > long time (>300 ms!).
>
> Are you VACUUMing often enough?

I've just VACUUMed multiple times, so that's probably not the problem.

cu
Did you vacuum full?

When you do lots of inserts and deletes, dead tuples get left behind. When you vacuum, postgres will reuse those dead tuples, but if you don't vacuum for a long time these tuples build up a lot. Even when you vacuum in this case, the dead tuples are still there, although they are marked for reuse. Vacuuming full actually removes the dead tuples. If you vacuum (normal) regularly, the number of dead tuples stays down, as they are regularly marked for reuse.

David

Enrico Weigelt wrote:
> * Enrico Weigelt <weigelt@metux.de> wrote:
>
> forgot to mention:
>
> + linux-2.6.9
> + postgres-7.4.6
> + intel celeron 2ghz
> + intel ultra ata controller
> + 768mb ram
>
> cu

--
David Mitchell
Software Engineer
Telogis

NOTICE: This message (including any attachments) contains CONFIDENTIAL INFORMATION intended for a specific individual and purpose, and is protected by law. If you are not the intended recipient, you should delete this message and are hereby notified that any disclosure, copying, or distribution of this message, or the taking of any action based on it, is strictly prohibited.
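[Editor's note: the difference can be sketched as follows; the table name `mytable` is a placeholder.]

```sql
-- Plain VACUUM: marks dead tuples as reusable but does not shrink the file.
VACUUM ANALYZE mytable;

-- VACUUM FULL: physically compacts the table and returns the space
-- (it takes an exclusive lock, so the table is blocked while it runs).
VACUUM FULL mytable;

-- Rough bloat check: a ~20-tuple table should occupy only a page or two;
-- a large relpages value indicates accumulated dead tuples.
SELECT relname, relpages, reltuples FROM pg_class WHERE relname = 'mytable';
```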
* David Mitchell <david.mitchell@telogis.com> wrote:
> Did you vacuum full?
>
> When you do lots of inserts and deletes, dead tuples get left behind.
> When you vacuum, postgres will reuse those dead tuples, but if you don't
> vacuum for a long time these tuples will build up lots. Even when you
> vacuum in this case, the dead tuples are still there, although they are
> marked for reuse. Vacuuming full actually removes the dead tuples.

I'm doing a VACUUM ANALYZE every 6 hours. Vacuuming manually doesn't seem to have any effect on that.

cu
Perhaps if you are doing a lot of inserts and deletes, vacuuming every 6 minutes would be closer to your mark. Try vacuuming every 15 minutes for a start and see how that affects things (you will have to do a vacuum full first to get the tables back into shape, since they have already slowed down).

David

Enrico Weigelt wrote:
> I'm doing a VACUUM ANALYZE every 6 hours.
>
> vacuum'ing manually doesnt seem to have any effect on that.
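[Editor's note: on 7.4, which has no integrated autovacuum, a scheduled vacuum along these lines is the usual way to do this; database and table names are placeholders.]

```
# Hypothetical crontab entries: plain VACUUM ANALYZE on the hot tables
# every 15 minutes, plus a nightly VACUUM FULL to compact them.
*/15 * * * *  psql -d mydb -c "VACUUM ANALYZE mytable;"
0 3 * * *     psql -d mydb -c "VACUUM FULL ANALYZE mytable;"
```

(contrib/pg_autovacuum, shipped with 7.4, can also trigger vacuums automatically based on activity.)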
* David Mitchell <david.mitchell@telogis.com> wrote:
> Perhaps if you are doing a lot of inserts and deletes, vacuuming every 6
> minutes would be closer to your mark. Try vacuuming every 15 minutes for
> a start and see how that affects things (you will have to do a vacuum
> full to get the tables back into shape after them slowing down as they
> have).

hmm. I've just done a vacuum full on these tables, but it doesn't seem to change anything :(

cu
On Mon, Jul 04, 2005 at 02:17:47AM +0200, Enrico Weigelt wrote:
> * David Mitchell <david.mitchell@telogis.com> wrote:
> > Perhaps if you are doing a lot of inserts and deletes, vacuuming every 6
> > minutes would be closer to your mark. Try vacuuming every 15 minutes for
> > a start and see how that affects things.
>
> hmm. I've just done vacuum full at the moment on these tables, but it
> doesnt seem to change anything :(

Maybe you need a REINDEX, if you have indexes on that table. Try that, coupled with the frequent VACUUM suggestion.

--
Alvaro Herrera (<alvherre[a]surnet.cl>)
"World domination is proceeding according to plan" (Andrew Morton)
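[Editor's note: that would look like the following; the table name is a placeholder, and on 7.4 REINDEX takes an exclusive lock on the table while it runs.]

```sql
-- Rebuild every index on the table, discarding index bloat
-- that plain VACUUM cannot reclaim.
REINDEX TABLE mytable;
```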
Hmm, you said you don't experience this when executing the query manually. What adapter are you using to access postgres from your application? libpq, npgsql or something else? And what is your method for running the query 'manually'? Are you running it locally or from a remote machine?

Regards
David

Enrico Weigelt wrote:
> hmm. I've just done vacuum full at the moment on these tables, but it
> doesnt seem to change anything :(
* Alvaro Herrera <alvherre@surnet.cl> wrote:
> Maybe you need a REINDEX, if you have indexes on that table. Try that,
> coupled with the frequent VACUUM suggestion.

I've tried it, but it doesn't seem to help :(

cu
* David Mitchell <david.mitchell@telogis.com> wrote:

Hi,

> Hmm, you said you don't experience this when executing the query
> manually. What adapter are you using to access postgres from your
> application? libpq, npgsql or something else?

Huh, it's a Delphi application ... (I didn't code it).

> And what is your method for running the query 'manually'. Are you
> running it locally or from a remote machine or what?

Using psql remotely - database and client machines are sitting on the same wire.

cu
On Mon, Jul 04, 2005 at 10:57:29AM +0200, Enrico Weigelt wrote:
> * Alvaro Herrera <alvherre@surnet.cl> wrote:
> > Maybe you need a REINDEX, if you have indexes on that table. Try that,
> > coupled with the frequent VACUUM suggestion.
>
> I've tried it, but it doesn't seem to help :(

So, let's back up a little. You have neither table nor index bloat, because you reindexed and full-vacuumed. So where does the slowness come from? Can you post an example EXPLAIN ANALYZE of the queries in question?

--
Alvaro Herrera (<alvherre[a]surnet.cl>)
"El realista sabe lo que quiere; el idealista quiere lo que sabe" (Anónimo)
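[Editor's note: EXPLAIN ANALYZE actually executes the statement it explains, so the slow INSERT can be timed inside a transaction and rolled back; table, column, and value names below are placeholders.]

```sql
BEGIN;
EXPLAIN ANALYZE
  INSERT INTO mytable (name, created) VALUES ('foo', now());
ROLLBACK;

-- And for the reads, showing per-node actual times:
EXPLAIN ANALYZE
  SELECT * FROM mytable WHERE name = 'foo' AND created < now();
```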
On Mon, 4 Jul 2005 10:59:03 +0200, Enrico Weigelt <weigelt@metux.de> wrote:
> huh, its a delphi application ... (I didnt code it).

Turn on statement logging. I've seen Delphi interfaces do extra queries on system tables to find some structure information. The available interfaces for Delphi that I know of are vitavoom's dbexpress (you should be able to find dbexppge.dll), zeos (you'll have to grep the executable), ODBC using ADO or BDE, or .NET.

klint.

+---------------------------------------+-----------------+
: Klint Gore                            : "Non rhyming    :
: EMail : kg@kgb.une.edu.au             :  slang - the    :
: Snail : A.B.R.I.                      :  possibilities  :
:         Mail University of New England:  are useless"   :
:         Armidale NSW 2351 Australia   :  L.J.J.         :
: Fax   : +61 2 6772 5376               :                 :
+---------------------------------------+-----------------+
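[Editor's note: on 7.4 the relevant postgresql.conf settings for statement logging look roughly like this.]

```
# postgresql.conf -- statement logging (PostgreSQL 7.4 syntax)
log_min_duration_statement = 200   # log statements running longer than 200 ms; -1 disables
log_statement = true               # log the text of every statement
log_duration = false               # set true to log the duration of every statement
```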
* Klint Gore <kg@kgb.une.edu.au> wrote:
<snip>
> Turn on statement logging. I've seen delphi interfaces do extra queries
> on system tables to find some structure information.

I'm already using statement logging for all queries taking longer than 200 ms. It seems that only the INSERT takes such a long time.

The client is in fact written in Delphi, and it sometimes seems to do strange things. For example, we had the effect that some new fields in some table were regularly NULLed. None of the triggers and rules inside the DB could do that (since there's no dynamic query stuff), and the Delphi application is the only one writing directly to this table.

cu