Discussion: backend dies suddenly after a lot of error messages
Hi,

We have problems with backend processes that close the channel because of
palloc() failures. When an INSERT statement fails, the backend reports an
error (e.g. `Cannot insert a duplicate key into a unique index') and
allocates a few bytes more memory. The next SQL statement that fails
causes the backend to allocate more memory again, etc. until we have no
more virtual memory left. Is this a bug?

We are using postgres 6.4.2 on FreeBSD 2.2.8.

It also works with psql:

toy=> create table mytable (i integer unique);
NOTICE:  CREATE TABLE/UNIQUE will create implicit index mytable_i_key for
table mytable
CREATE
toy=> \q

~ $ # now do a lot of inserts that cause error messages:
~ $ while true; do echo "INSERT INTO mytable VALUES (1);"; done | psql toy
INSERT INTO mytable VALUES (1);
ERROR:  Cannot insert a duplicate key into a unique index
...quite a lot of these messages
INSERT INTO mytable VALUES (1);
ERROR:  Cannot insert a duplicate key into a unique index
INSERT INTO mytable VALUES (1);

pqReadData() -- backend closed the channel unexpectedly.
        This probably means the backend terminated abnormally before or
        while processing the request.
We have lost the connection to the backend, so further processing is
impossible. Terminating.

Hmm, why does the backend allocate more and more memory with each failed
INSERT? Any clues?

Thanks,
Mirko
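The `while true` driver above never exits on its own. A bounded variant of the same repro (a sketch — the `mytable` table and `toy` database names are just the ones from the message above) is easier to script and to stop:

```shell
# Emit a fixed number of duplicate-key INSERTs instead of an endless
# stream, so the run terminates by itself.
gen_inserts() {
    n=$1
    i=0
    while [ "$i" -lt "$n" ]; do
        echo "INSERT INTO mytable VALUES (1);"
        i=$((i + 1))
    done
}

# Usage (assumes a running server and the `toy` database from the repro):
# gen_inserts 100000 | psql toy
```

Watching the backend's process size (e.g. with `ps` or `top`) during such a bounded run makes the per-error growth easy to measure.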
A bug report on this was filed against the 6.3 release as well. I don't
know its status, however :(

Mirko Kaffka wrote:
> We have problems with backend processes that close the channel because of
> palloc() failures. When an INSERT statement fails, the backend reports an
> error (e.g. `Cannot insert a duplicate key into a unique index') and
> allocates a few bytes more memory. The next SQL statement that fails
> causes the backend to allocate more memory again, etc. until we have no
> more virtual memory left. Is this a bug?
> We are using postgres 6.4.2 on FreeBSD 2.2.8.
> [psql reproduction snipped]

--
------------------------------------------------------------
Thomas Reinke                            Tel: (416) 460-7021
Director of Technology                   Fax: (416) 598-2319
E-Soft Inc.                    http://www.e-softinc.com
Mirko Kaffka <mirko@interface-business.de> writes:
> We have problems with backend processes that close the channel because of
> palloc() failures. When an INSERT statement fails, the backend reports an
> error (e.g. `Cannot insert a duplicate key into a unique index') and
> allocates a few bytes more memory. The next SQL statement that fails
> causes the backend to allocate more memory again, etc. until we have no
> more virtual memory left. Is this a bug?

Yeah, I'd say so --- all the memory used should get freed at transaction
end, but evidently it isn't happening.

> We are using postgres 6.4.2 on FreeBSD 2.2.8.

I still see it with 6.5-current sources.  Will take a look.

			regards, tom lane
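"Freed at transaction end" refers to the backend's region-based allocation scheme: everything palloc()'d lands in the current memory context, and resetting that context reclaims it all at once. A toy sketch of the idea (not the real PostgreSQL code; all names here are invented for illustration):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy region-based ("memory context") allocator.  Every chunk handed
 * out during a transaction is linked into the context, so transaction
 * end can reclaim them all in one sweep.  The leak in this thread
 * amounts to the abort path skipping that sweep. */

typedef struct Chunk {
    struct Chunk *next;          /* links all chunks in one context */
} Chunk;

typedef struct MemoryContext {
    Chunk  *chunks;
    size_t  nchunks;
} MemoryContext;

/* palloc-style: allocate from a context, remembering the chunk */
static void *ctx_palloc(MemoryContext *ctx, size_t size)
{
    Chunk *c = malloc(sizeof(Chunk) + size);
    c->next = ctx->chunks;
    ctx->chunks = c;
    ctx->nchunks++;
    return c + 1;                /* caller's memory follows the header */
}

/* Free everything in the context -- what must happen at transaction
 * abort no matter where the error was raised. */
static void ctx_reset(MemoryContext *ctx)
{
    Chunk *c = ctx->chunks;
    while (c) {
        Chunk *next = c->next;
        free(c);
        c = next;
    }
    ctx->chunks = NULL;
    ctx->nchunks = 0;
}

/* Simulate a statement that allocates, then an abort that reclaims.
 * Returns 0 when the sweep left nothing behind. */
static int demo(void)
{
    MemoryContext xact = { NULL, 0 };
    for (int i = 0; i < 100; i++)
        memset(ctx_palloc(&xact, 64), 0, 64);
    if (xact.nchunks != 100)
        return 1;
    ctx_reset(&xact);            /* transaction abort: one sweep */
    return (xact.nchunks == 0 && xact.chunks == NULL) ? 0 : 1;
}
```

The bug report boils down to `ctx_reset()` (in reality, releasing the blank portal's heap) not being reached on the error path.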
Re: [HACKERS] backend dies suddenly after a lot of error messages

From: jwieck@debis.com (Jan Wieck)
> Mirko Kaffka <mirko@interface-business.de> writes:
> > We have problems with backend processes that close the channel because
> > of palloc() failures. When an INSERT statement fails, the backend
> > reports an error and allocates a few bytes more memory, etc. until we
> > have no more virtual memory left. Is this a bug?
>
> Yeah, I'd say so --- all the memory used should get freed at transaction
> end, but evidently it isn't happening.
>
> > We are using postgres 6.4.2 on FreeBSD 2.2.8.
>
> I still see it with 6.5-current sources.  Will take a look.

I remember having fixed some of those, but I haven't found all the
places. I think there's still something in tcop where the querytree
list is malloc()'d.

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#======================================== jwieck@debis.com (Jan Wieck) #
Mirko Kaffka <mirko@interface-business.de> wrote:
> We have problems with backend processes that close the channel because of
> palloc() failures. When an INSERT statement fails, the backend reports an
> error (e.g. `Cannot insert a duplicate key into a unique index') and
> allocates a few bytes more memory. The next SQL statement that fails
> causes the backend to allocate more memory again, etc. until we have no
> more virtual memory left. Is this a bug?
> We are using postgres 6.4.2 on FreeBSD 2.2.8.

I have found the primary cause of memory leakage after an error ---
basically, the backend forgets to free *any* of the temporary memory
allocated up to the point of the error :-(.  If your applications tend
to provoke many SQL errors, then you will see a backend process that
eats up more and more memory until it hits your local system's process
size limit, whereupon it crashes.

I have repaired this problem in the 6.5 development sources.  Attached
is a patch for 6.4.2, which I suggest you apply if this sounds like a
problem you are having.  The patch does not completely eliminate memory
leaks after errors, but they seem to be reduced to the
few-hundred-bytes-per-error range instead of the kilobytes (potentially
lots of kilobytes) range.  I am working on curing the problem more
completely for 6.5.

			regards, tom lane

*** src/backend/access/transam/xact.c.orig	Thu Oct  8 14:29:15 1998
--- src/backend/access/transam/xact.c	Thu May 13 18:53:05 1999
***************
*** 767,776 ****
  static void
  AtAbort_Memory()
  {
  	/* ----------------
! 	 *	after doing an abort transaction, make certain the
! 	 *	system uses the top memory context rather then the
! 	 *	portal memory context (until the next transaction).
  	 * ----------------
  	 */
  	MemoryContextSwitchTo(TopMemoryContext);
--- 767,791 ----
  static void
  AtAbort_Memory()
  {
+ 	Portal		portal;
+ 	MemoryContext portalContext;
+ 
  	/* ----------------
! 	 *	Release memory in the blank portal.
! 	 *	Since EndPortalAllocMode implicitly works on the current context,
! 	 *	first make real sure that the blank portal is the selected context.
! 	 *	(This is ESSENTIAL in case we aborted from someplace where it wasn't.)
! 	 * ----------------
! 	 */
! 	portal = GetPortalByName(NULL);
! 	portalContext = (MemoryContext) PortalGetHeapMemory(portal);
! 	MemoryContextSwitchTo(portalContext);
! 	EndPortalAllocMode();
! 
! 	/* ----------------
! 	 *	Now that we're "out" of a transaction, have the
! 	 *	system allocate things in the top memory context instead
! 	 *	of the blank portal memory context.
  	 * ----------------
  	 */
  	MemoryContextSwitchTo(TopMemoryContext);
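The reason cleanup has to live in a single abort routine like AtAbort_Memory at all: the backend recovers from elog(ERROR) via a non-local jump, so any free() written after the failing call simply never executes. A minimal illustration of that failure mode (all names here are invented for the sketch, not PostgreSQL's):

```c
#include <assert.h>
#include <setjmp.h>
#include <stdlib.h>

/* Sketch of why per-allocation cleanup cannot work across errors:
 * the error "throw" longjmp()s out of whatever function was running,
 * skipping every free() that came after the failure point. */

static jmp_buf err_jmp;          /* stand-in for the backend's error-recovery jmp_buf */
static int unreleased;           /* counts allocations whose free() never ran */

static void fake_elog(void)
{
    longjmp(err_jmp, 1);         /* abandon the current statement */
}

static void failing_statement(void)
{
    void *tmp = malloc(64);
    unreleased++;
    fake_elog();                 /* error raised: control leaves here */
    free(tmp);                   /* unreachable cleanup */
    unreleased--;
}

/* Run one failing statement under the error handler; returns how many
 * allocations were left behind by the skipped per-call cleanup. */
static int run_once(void)
{
    unreleased = 0;
    if (setjmp(err_jmp) == 0)
        failing_statement();
    return unreleased;
}
```

Because unwinding can't run the skipped frees, the fix is exactly what the patch does: make the abort handler select the context everything was allocated in and release it wholesale.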
> > We have problems with backend processes that close the channel because of
> > palloc() failures. When an INSERT statement fails, the backend reports an
> > error (e.g. `Cannot insert a duplicate key into a unique index') and
> > allocates a few bytes more memory. The next SQL statement that fails
> > causes the backend to allocate more memory again, etc. until we have no
> > more virtual memory left. Is this a bug?
> > We are using postgres 6.4.2 on FreeBSD 2.2.8.
> > [psql reproduction snipped]
> >
> > Hmm, why does the backend allocate more and more memory with each failed
> > INSERT ?
> > Any clues?

There was a bug in pre-6.5 versions that caused elog failures not to
release their memory. There is still a small leak per elog, but it is
only a few bytes. You should find this is fixed in 6.5.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026