Discussion: Transaction eating up all RAM
I have a stored proc that retrieves a bunch of data, stores it in a temp table, computes all sorts of totals/averages/whatnots from the temp table, and inserts the results in another table. It works fine (except I don't like wrapping all SQL statements in 'execute'), but multiple calls to that proc from another procedure cause excessive memory usage (upwards of 400M), and the server eventually runs out of swap space. I believe this is because PG caches transactions in RAM, and this particular one is a bit too big.
Is there any way to make PG write the transaction to disk instead of caching it? Alternatively, I would be happy to have full control over transactions inside the stored procedure - like not wrapping the proc call inside an implicit transaction, and doing my own 'BEGIN' and 'COMMIT' inside the proc... but of course PG won't support that.
Right now the only way to make this work is to build an SQL script with hundreds of calls to the stored proc in question, and run it directly.
Any ideas?
Peter
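For context, the 'execute' wrapping mentioned above is the usual workaround when a function drops and re-creates a temp table: cached plans in non-dynamic statements reference the old table's OID, so every statement touching the table must be issued dynamically to be re-planned on each call. A minimal sketch of that pattern in PL/pgSQL (all table and function names here are invented for illustration; the original proc is in PL/PerlU):

```sql
-- Hypothetical sketch: because work_data is dropped and re-created on
-- every call, each statement that touches it must go through EXECUTE,
-- otherwise a cached plan would point at a stale table OID.
CREATE OR REPLACE FUNCTION daily_report() RETURNS void AS $$
BEGIN
    EXECUTE 'CREATE TEMP TABLE work_data AS
             SELECT id, amount FROM source_data';
    EXECUTE 'INSERT INTO report_totals (total, average)
             SELECT sum(amount), avg(amount) FROM work_data';
    EXECUTE 'DROP TABLE work_data';
END;
$$ LANGUAGE plpgsql;
```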
"Peter" <peter@greatnowhere.com> writes:
> I have stored proc that retrieves a bunch of data, stores it in temp
> table, computes all sorts of totals/averages/whatnots from the temp
> table, and inserts results in another table. It works fine (except I
> don't like wrapping all SQL statements in 'execute'), but multiple calls
> to that proc from another procedure causes excessive memory usage
> (upwards of 400M), and server eventually runs out of swap space. I
> believe this is because PG caches transactions in RAM, and this
> particular one is a bit too big.

Your belief is incorrect.

You could be looking at a memory-leak bug. Or, if there are foreign
keys involving the tables, you could be looking at the list of pending
foreign key trigger events getting too large. There's not enough
information here to say.

			regards, tom lane
>> I have stored proc that retrieves a bunch of data, stores it in temp
>> table, computes all sorts of totals/averages/whatnots from the temp
>> table, and inserts results in another table. It works fine (except I
>> don't like wrapping all SQL statements in 'execute'), but multiple calls
>> to that proc from another procedure causes excessive memory usage
>> (upwards of 400M), and server eventually runs out of swap space. I
>> believe this is because PG caches transactions in RAM, and this
>> particular one is a bit too big.
>
> Your belief is incorrect.
>
> You could be looking at a memory-leak bug. Or, if there are foreign
> keys involving the tables, you could be looking at the list of pending
> foreign key trigger events getting too large. There's not enough
> information here to say.

I have no triggers defined on any of the tables, and no foreign keys that
could cause cascaded updates and such. Care to see the full text of the
proc? It's PL/PerlU.

Peter
> >> I have stored proc that retrieves a bunch of data, stores it in temp
> >> table, computes all sorts of totals/averages/whatnots from the temp
> >> table, and inserts results in another table. It works fine (except I
> >> don't like wrapping all SQL statements in 'execute'), but multiple calls
> >> to that proc from another procedure causes excessive memory usage
> >> (upwards of 400M), and server eventually runs out of swap space. I
> >> believe this is because PG caches transactions in RAM, and this
> >> particular one is a bit too big.

Is that multiple simultaneous calls? Maybe you are overcommitting your
sort memory. If you can reproduce the out-of-memory behavior from a
single backend, that argues for a memory leak.

p.s. You can create one function, temp_tables_init(), called after
connecting to the backend (and not in a transaction), which creates all
the temp tables for the process. If you do that and remember to truncate
the tables (not drop them), you can use non-dynamic PL/pgSQL calls.

Merlin
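A minimal sketch of the pattern Merlin describes (table and function names are invented for illustration): create the temp tables once per backend connection, then have the worker proc only TRUNCATE and refill them. Because the tables are never dropped, their OIDs stay stable and plain, non-dynamic PL/pgSQL statements can keep their cached plans.

```sql
-- Called once per backend, right after connecting and outside any
-- transaction, so the temp tables outlive individual proc calls.
CREATE OR REPLACE FUNCTION temp_tables_init() RETURNS void AS $$
BEGIN
    CREATE TEMP TABLE work_data (id int, amount numeric);
END;
$$ LANGUAGE plpgsql;

-- The worker only truncates (never drops) the temp table, so these
-- statements can be static and their plans reused across calls.
CREATE OR REPLACE FUNCTION compute_totals() RETURNS void AS $$
BEGIN
    TRUNCATE work_data;
    INSERT INTO work_data SELECT id, amount FROM source_data;
    INSERT INTO report_totals (total, average)
        SELECT sum(amount), avg(amount) FROM work_data;
END;
$$ LANGUAGE plpgsql;
```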
"Peter" <peter@greatnowhere.com> writes:
> I have no triggers defined on any of the tables, and no foreign keys that
> could cause cascaded updates and stuff. Care to see full text of the proc?
> It's PL/PerlU.

If there are no triggers involved, then it sounds like a memory leak.
What PG version is this? If it's current, then we'd like a test case, yes.

			regards, tom lane
> >> I have stored proc that retrieves a bunch of data, stores it in temp
> >> table, computes all sorts of totals/averages/whatnots from the temp
> >> table, and inserts results in another table. It works fine (except I
> >> don't like wrapping all SQL statements in 'execute'), but multiple calls
> >> to that proc from another procedure causes excessive memory usage
> >> (upwards of 400M), and server eventually runs out of swap space. I
> >> believe this is because PG caches transactions in RAM, and this
> >> particular one is a bit too big.
>
> is that multiple simultaneous calls? maybe you are over committing
> your sort memory. If you can reproduce the out of memory behavior
> from a single backend that argues for a memory leak.

Single backend.

> p.s. you can create one function temp_tables_init(), called after
> connection to backend (and not in a transaction) which creates all
> temp tables for the process. If you do that and remember to truncate
> the tables (not drop), you can use non-dynamic pl/pgsql calls.

That's an interesting option... it would make our PHP frontend a bit more
complex, though. Obviously performance would be better in that case, as
the query plans will be pre-compiled.

Peter