Discussion: getting our bearings on "out of memory. failed on request of size..."
Hi All,

Our PG server is serving up 50 databases or so (with identical schemas), and the largest one of those has started giving us "out of memory. failed on request of size n" errors in many places. This is a 9 GB database with 100+ tables, the largest of which have 4 million or so rows. We see the out-of-memory errors when trying to run a "hard" SQL statement (SELECTing from a complex view, updating all rows in a large table, etc.).

Can anybody point us in a good direction of how to track down just why this is happening? Is there something we should be looking for in the log (and log settings to turn on)? What is a good way to log memory usage of statements?

Any help much appreciated in figuring out how to approach problems like these.

Thanks,
-Tom
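[For the log-settings question above, a sketch of the relevant postgresql.conf knobs. The setting names are the standard PostgreSQL GUCs, but the values shown are illustrative only; note the collector setting is called `logging_collector` in 8.3+ and `redirect_stderr` in older releases, so check against your server's version.]

```ini
# postgresql.conf -- illustrative sketch, not a tuned configuration

logging_collector = on            # capture postmaster stderr, where the
                                  # memory usage map is dumped on OOM
                                  # (redirect_stderr on pre-8.3 servers)
log_directory = 'pg_log'          # relative to the data directory
log_min_error_statement = error   # log the statement that triggered each
                                  # ERROR, so you can tie the OOM to a query
```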
Tom Darci <tom@nuws.com> writes:
> Our PG server is serving up 50 databases or so (with identical
> schemas), and the largest one of those has started giving us "out of
> memory. failed on request of size n" errors in many places. This is a
> 9 GB database with 100+ tables, the largest of which have 4 million or
> so rows. We see the Out of Memory errors when trying to run a "hard"
> SQL statement (SELECTing from a complex view, updating all rows in a
> large table, etc...)

Well, it would be important to know if "n" is little or big, and to see the EXPLAIN output of the problem query. Also, an out-of-memory error will cause a memory usage map to be dumped to postmaster stderr --- that would be pretty useful to see too, if your logging setup captures it.

			regards, tom lane
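[The "memory usage map" mentioned above is a per-context dump that PostgreSQL writes to stderr when an allocation fails. A minimal sketch of what to look for, using a simulated log: the context names and numbers below are made up for illustration, but the overall shape (one line per memory context, then the ERROR/DETAIL pair) matches what the server emits.]

```shell
# Build a sample log in the shape of a real OOM dump (contents invented),
# then pull out the lines leading up to the error -- on a real server you
# would grep the stderr log captured by the logging collector instead.
cat > /tmp/pg_oom_sample.log <<'EOF'
TopMemoryContext: 49816 total in 2 blocks; 5488 free (6 chunks); 44328 used
  MessageContext: 8192 total in 1 blocks; 1024 free (1 chunks); 7168 used
  ExecutorState: 67108864 total in 12 blocks; 128 free (2 chunks); 67108736 used
ERROR:  out of memory
DETAIL:  Failed on request of size 67108864.
EOF
grep -B 5 "out of memory" /tmp/pg_oom_sample.log
```

The context with the outsized "total" figure (here the invented `ExecutorState` line) is usually the one worth reporting back to the list alongside the EXPLAIN output.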
Thanks Tom,

The "n" has always been small, like 20, or 4 or 32 (these numbers are from memory, not written down), but the one we ran into today was large: 67108864. I will head to the EXPLAIN, as well as to getting the memory dump from the log.

Regards,
-Tom

------------------------------------------------------------
Tom Darci
tom@nuws.com

On Oct 28, 2008, at 7:44 AM, Tom Lane wrote:
> Tom Darci <tom@nuws.com> writes:
>> Our PG server is serving up 50 databases or so (with identical
>> schemas), and the largest one of those has started giving us "out of
>> memory. failed on request of size n" errors in many places. This is a
>> 9 GB database with 100+ tables, the largest of which have 4 million or
>> so rows. We see the Out of Memory errors when trying to run a "hard"
>> SQL statement (SELECTing from a complex view, updating all rows in a
>> large table, etc...)
>
> Well, it would be important to know if "n" is little or big, and to see
> the EXPLAIN output of the problem query. Also, an out-of-memory error
> will cause a memory usage map to be dumped to postmaster stderr ---
> that would be pretty useful to see too, if your logging setup captures
> it.
>
> 			regards, tom lane
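[One observation worth noting about the large figure quoted above: 67108864 bytes is exactly 64 MiB, a clean power of two. That suggests (an assumption, not something established in the thread) a single large allocation request, such as a sort or hash work area doubling in size, rather than memory being exhausted gradually. A quick arithmetic check:]

```python
# 67108864 bytes is exactly 64 MiB -- a power-of-two allocation size.
size = 67108864
print(size == 64 * 1024 * 1024)  # → True
print(size == 2 ** 26)           # → True
```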