Discussion: CachedPlan logs until full disk
Hello,

This is the second time in two weeks that I have seen very strange PostgreSQL behavior on an 8.4.22 installation (still 32-bit). The logfile grows within a few hours until it fills the whole disk. I can read an endless series of messages like these:

CachedPlan: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
CachedPlanSource: 1024 total in 1 blocks; 336 free (0 chunks); 688 used
SPI Plan: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
CachedPlan: 1024 total in 1 blocks; 200 free (0 chunks); 824 used
CachedPlanSource: 1024 total in 1 blocks; 96 free (0 chunks); 928 used
SPI Plan: 1024 total in 1 blocks; 928 free (0 chunks); 96 used
CachedPlan: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
CachedPlanSource: 1024 total in 1 blocks; 336 free (0 chunks); 688 used
SPI Plan: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
CachedPlan: 1024 total in 1 blocks; 200 free (0 chunks); 824 used

I had to stop the server, delete the logs, and restart it, after which the problem went away. But I do not understand why it occurred in the first place.

Thank you for any help!

Francesco
Job <Job@colliniconsulting.it> writes:
> it is the second time (in two weeks) that I have a very strange PostgreSQL in an 8.4.22 installation (still 32-bit).

You realize, of course, that 8.4.x has been out of support for a couple of years now.

> The logfile grows (in a few hours) until it fills the whole disk space.
> I can read an infinite series of these messages:
> CachedPlan: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
> CachedPlanSource: 1024 total in 1 blocks; 336 free (0 chunks); 688 used
> SPI Plan: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
> CachedPlan: 1024 total in 1 blocks; 200 free (0 chunks); 824 used
> CachedPlanSource: 1024 total in 1 blocks; 96 free (0 chunks); 928 used
> SPI Plan: 1024 total in 1 blocks; 928 free (0 chunks); 96 used
> CachedPlan: 1024 total in 1 blocks; 640 free (0 chunks); 384 used

This appears to be a fragment of a memory map that would be produced in conjunction with an "out of memory" error. It's difficult to say much more than that with only this much information, but clearly you need to do something to prevent recurrent out-of-memory errors.

If looking at the map as a whole makes it clear that it's zillions of CachedPlans that are chewing up most of the memory, then I would guess that they are getting leaked as a result of constantly replacing plpgsql functions --- does your application do a lot of CREATE OR REPLACE FUNCTION commands? I don't think plpgsql coped with that very well before 9.1.

			regards, tom lane
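[Editor's note: for readers unfamiliar with the pattern Tom describes, a hypothetical sketch of the kind of workload that could trigger it (the function and table names are illustrative, not from this thread):]

```sql
-- Illustrative only: an application that re-issues CREATE OR REPLACE
-- FUNCTION on every run (e.g. at the top of each batch job).  On
-- plpgsql versions before 9.1, cached plans tied to the replaced
-- definition could be leaked, so a long-lived backend doing this
-- repeatedly accumulates CachedPlan/CachedPlanSource contexts until
-- it hits an out-of-memory error and dumps the memory map seen above.
CREATE OR REPLACE FUNCTION archive_row(p_payload text) RETURNS void AS $$
BEGIN
    INSERT INTO system_log_archive (logged_at, payload)
    VALUES (now(), p_payload);
END;
$$ LANGUAGE plpgsql;
```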
Dear Tom,

Thank you for the reply. Tonight the problem happened again:

>> CachedPlan: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
>> CachedPlanSource: 1024 total in 1 blocks; 336 free (0 chunks); 688 used
>> SPI Plan: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
>> CachedPlan: 1024 total in 1 blocks; 200 free (0 chunks); 824 used
>> CachedPlanSource: 1024 total in 1 blocks; 96 free (0 chunks); 928 used
>> SPI Plan: 1024 total in 1 blocks; 928 free (0 chunks); 96 used
>> CachedPlan: 1024 total in 1 blocks; 640 free (0 chunks); 384 used

> This appears to be a fragment of a memory map that would be produced
> in conjunction with an "out of memory" error. It's difficult to say
> much more than that with only this much information, but clearly you
> need to do something to prevent recurrent out-of-memory errors.

We were automatically bulk-loading into a table that archives system logging. We used pg_bulk, but no CREATE OR REPLACE FUNCTION. That process often uses a lot of memory.

Just one question: do you think it is possible to disable those log messages?

Thank you!

Francesco

________________________________________
From: Tom Lane [tgl@sss.pgh.pa.us]
Sent: Friday, 4 November 2016, 21:24
To: Job
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] CachedPlan logs until full disk
Job <Job@colliniconsulting.it> writes:
> Tonight this problem happened again:
> CachedPlan: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
> CachedPlanSource: 1024 total in 1 blocks; 336 free (0 chunks); 688 used
> SPI Plan: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
> CachedPlan: 1024 total in 1 blocks; 200 free (0 chunks); 824 used
> CachedPlanSource: 1024 total in 1 blocks; 96 free (0 chunks); 928 used
> SPI Plan: 1024 total in 1 blocks; 928 free (0 chunks); 96 used
> CachedPlan: 1024 total in 1 blocks; 640 free (0 chunks); 384 used

> Just one question: do you think it is possible to disable that logging sentence?

The logging printout is not your problem; or at least, it's entirely unhelpful to regard it that way. Your problem is the out-of-memory situation it's reporting on. As I said before, you need to investigate what behavior of your application is causing that and take steps to mitigate it.

			regards, tom lane
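[Editor's note: while the underlying out-of-memory problem is being tracked down, the disk-filling side effect can at least be bounded with the stock logging collector. A sketch, using settings that exist in 8.4; the values are illustrative, and this caps log growth only -- it does not silence the memory-map output or fix the leak:]

```
# postgresql.conf -- illustrative values, not a fix for the OOM itself
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%a.log'   # one file per weekday
log_truncate_on_rotation = on        # overwrite last week's file on reuse
log_rotation_age = 1d                # start a new file daily
log_rotation_size = 100MB            # also rotate if a file reaches this size
```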