Discussion: resource leak in 7.2
There was a bug in 7.1.2 that was fixed in 7.1.3 that may be back in 7.2. I downloaded 7.2 yesterday morning and started testing it. I left some procedures running overnight performing random updates in a table. Looking at task manager this morning, the three backend processes are each consuming about 1.5e6 handles. I'd report this to the bug system but can't get there from here this morning.
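[The handle figures above came from Task Manager. For anyone wanting to log the same numbers over time rather than watching the GUI, here is a minimal monitoring sketch. It is not part of the original report: it assumes a Windows host where the Win32 GetProcessHandleCount call is available, and the backend PIDs are placeholders you would fill in yourself.]

import ctypes
import time

kernel32 = ctypes.windll.kernel32
PROCESS_QUERY_INFORMATION = 0x0400

def handle_count(pid):
    """Return the current kernel handle count for one process, or None on failure."""
    h = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION, False, pid)
    if not h:
        return None
    count = ctypes.c_ulong(0)
    ok = kernel32.GetProcessHandleCount(h, ctypes.byref(count))
    kernel32.CloseHandle(h)
    return count.value if ok else None

if __name__ == "__main__":
    backend_pids = [1234, 1235, 1236]   # hypothetical PIDs of the postgres backends
    while True:
        print(" ".join("%d:%s" % (pid, handle_count(pid)) for pid in backend_pids))
        time.sleep(60)   # sample once a minute; a leak shows up as a steadily rising count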
"Tom Pfau" <T.Pfau@emCrit.com> writes: > There was a bug in 7.1.2 that was fixed in 7.1.3 that may be back in > 7.2. > I downloaded 7.2 yesterday morning and started testing it. I left some > procedures running overnight performing random updates in a table. > Looking at task manager this morning, the three backend processes are > each consuming about 1.5e6 handles. I see nothing in the CVS logs to make me think that any file-handle-leakage bug was fixed between 7.1.2 and 7.1.3; and even less to make me think that 7.1.3 contains any fixes that weren't also applied to the 7.2 line. You're going to have to be a great deal more specific. regards, tom lane
We were initially using 7.1.2. Some overnight tests of our application revealed that the backends were not releasing handles, according to task manager. We upgraded to 7.1.3 and the problem went away. My initial testing of 7.2 shows the same symptom, although I don't know that it's the same cause. I just noted that we had seen the problem before, so that if anyone was aware of the prior problem, they might be able to check that the fix hadn't been lost in the new version. I don't know exactly what changed between 7.1.2 and 7.1.3 to make the problem go away.

In any case, if I run my test procedure, which just performs updates to a random column of a random row of a 1000 row table, I can watch the handle usage continue to rise on the backend process.

Looking a bit more closely now, a single process doesn't seem to cause a problem. Running two or more copies simultaneously causes the backends to continuously consume handles. If I stop one of the processes, the other stops losing handles.

I'm attaching the test procedure. BTW, the file it reads for data contains a log of a vacuum session that has been reformatted slightly. The lines are between 3 and 76 bytes.
Attachments
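[The attached test procedure is not reproduced in the archive. As a rough sketch of the workload described above, assuming a 1000-row table named leak_test with an integer id column, a few updatable columns, and the psycopg2 driver (all illustrative choices, not taken from the original message), the loop would look roughly like this. Running two or more copies at once matches the conditions under which the leak was reported.]

import random
import psycopg2

COLUMNS = ["col_a", "col_b", "col_c"]   # hypothetical updatable columns

def run(dsn="dbname=test"):
    conn = psycopg2.connect(dsn)
    cur = conn.cursor()
    while True:
        row_id = random.randint(1, 1000)     # table assumed to hold ids 1..1000
        col = random.choice(COLUMNS)         # pick a random column from the fixed whitelist
        cur.execute(
            "UPDATE leak_test SET %s = %%s WHERE id = %%s" % col,
            (random.randint(0, 1000000), row_id),
        )
        conn.commit()

if __name__ == "__main__":
    # Start two or more copies of this script simultaneously: a single copy
    # reportedly did not show the handle growth, concurrent copies did.
    run()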
"Tom Pfau" <T.Pfau@emCrit.com> writes: > In any case, if I run my test procedure which just performs updates to a > random column of a random row of a 1000 row table, I can watch the > handle usage continue to rise on the backend process. > Looking a bit more closely now, a single process doesn't seem to cause a > problem. Running two or more copies simultaneously causes the backends > to continuously consume handles. If I stop one of the processes, the > other stops losing handles. Unsurprisingly, I don't see any such problem under Unix (HPUX to be specific, but I'd be astonished to see PG on any Unix do that, considering that each backend manages its open files independently). I think you must be looking at some misbehavior of the Cygwin layer. If the problem really did go away in 7.1.3, perhaps some fix was applied by the Cygwin packager and not contributed back to the main code base. But it's got to be a Cygwin bug anyway; the most we could do is find some workaround to avoid triggering it. regards, tom lane