Re: transaction control in pl/pgsql

From: Birgit Laggner
Subject: Re: transaction control in pl/pgsql
Date:
Msg-id: 4BB493AA.7040508@vti.bund.de
In response to: Re: transaction control in pl/pgsql (Alban Hertroys <dalroi@solfertje.student.utwente.nl>)
Responses: Re: transaction control in pl/pgsql
List: pgsql-general

Hi Alban,

thanks for your detailed answer!

My database settings:
max connections: 20
work_mem: 100MB
shared buffers: 12000MB

Server memory:
physical 32GB
total memory 50GB (incl. swap)
shared memory ??
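
In case it helps, the values above can be double-checked with plain
SHOW / pg_settings queries (standard commands, nothing specific to my
setup):

  -- memory-related settings currently in effect
  SHOW max_connections;
  SHOW work_mem;
  SHOW shared_buffers;

  -- or all at once from the catalog
  SELECT name, setting, unit
  FROM pg_settings
  WHERE name IN ('max_connections', 'work_mem',
                 'shared_buffers', 'maintenance_work_mem');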

I am not sure if I use deferred constraints - the only constraints I use
in the function are NOT NULL constraints. But I don't believe my memory
overflow had anything to do with them, because the error occurred during
a loop, and nothing should change suddenly regarding any NOT NULL
constraints between one loop cycle and the next.
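
Just to rule deferred constraints out completely, I could list any
DEFERRABLE constraints from the standard pg_constraint catalog (a quick
sketch; NOT NULL constraints don't appear there at all):

  -- constraints declared DEFERRABLE in the database
  SELECT conname,
         conrelid::regclass AS table_name,
         contype,
         condeferred
  FROM pg_constraint
  WHERE condeferrable;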

My function was the only connection to the database as far as I know,
so I don't think the per-connection allocation of work_mem should be the
reason...

I would say yes, my function does seem to store large amounts of data in
memory. But in the function I tried to store as much of the interim
results as possible in real tables (not temp tables) instead of keeping
them in variables. My guess is that postgres doesn't actually write those
tables out during the transaction and therefore keeps everything in
memory. (By the way, the swap was used up too.)
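
To illustrate the pattern (a heavily stripped-down sketch; the table and
column names here are made up and not from my real function):

  -- interim results of each loop cycle go into a real table
  -- instead of accumulating in plpgsql variables
  CREATE TABLE interim_result (step integer, item_id integer, val numeric);

  CREATE OR REPLACE FUNCTION process_step(p_step integer) RETURNS void AS $$
  DECLARE
      rec record;
  BEGIN
      FOR rec IN SELECT item_id, val FROM source_data LOOP
          INSERT INTO interim_result (step, item_id, val)
          VALUES (p_step, rec.item_id, rec.val);
      END LOOP;
      -- note: everything above still runs inside the single transaction
      -- of the outer call, so nothing is committed until that call ends
  END;
  $$ LANGUAGE plpgsql;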

It's really difficult to post only a part of the function, because every
step builds on the result of the previous one. I also suspect that no
single step on its own would cause the memory overflow, but rather that
the memory use adds up with every step. Still, I will try to cut the
function into little snippets and run them one by one - perhaps the
memory overflow still occurs for one of the snippets...
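
My idea for running the snippets one by one would simply be separate
top-level calls, so each one commits in its own transaction (the function
names below are made up):

  SELECT step1_prepare();
  SELECT step2_intersect();
  SELECT step3_aggregate();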

If you have any ideas ...

Thanks again and regards,

Birgit.




On 01.04.2010 13:27, Alban Hertroys wrote:
> On 1 Apr 2010, at 12:22, Birgit Laggner wrote:
>
>
>> Dear list,
>>
>> I have some data (big size) and I've written a long function in pl/pgsql
>> which processes the data in several steps. At a test run my function
>> aborted because of memory exhaustion. My guess is that everything that
>> happens during the function's transaction is stored in memory - until
>> it's full...
>>
> No, transactions operate on the database just like anything else. Running out of memory doing that is not impossible,
> but you'd need to create some really big transactions to reach those limits (I think I've been quoted the limit being
> 2^32 instructions per transaction a looong time ago, just to give you an indication).
>
> You're probably running out of memory for another reason. Now to figure out why...
>
> I've seen an issue with deferred constraints causing this, for example. Do you use deferred constraints?
>
> Another cause that pops up regularly is that people specify too much global work_mem for postgres. work_mem gets
> allocated per connection, so the more connections you have the more memory goes to work_mem and other resources may
> receive too little, or you cross what's available.
> What are your relevant postgres settings (max connections, work_mem, shared mem, etc.) and how much physical, total
> and shared memory does your server have?
>
> Another possibility is that your function stores large amounts of data in variables that are not backed up by
> database tables. That means all that data will be stored in memory, and even if it goes to swap at some point (not sure
> it will, it would seriously hurt performance) there is a limit.
> If this is the case, maybe you could use temporary tables to process that data instead of trying to do it all in
> memory.
>
>
>> So, my idea for solving that problem would be to cut the
>> big function into smaller functions. But, I don't want to write 30
>> function calls at the end - I would rather like to have one function
>> which is calling all these small functions, so I would only have to
>> write one sql-query at the end.
>>
> Splitting up big functions into smaller functions is always a good idea. That's part of general programming
> paradigms. It won't cut down the size of your transaction though.
>
>
>> What I fear is either, that, if this
>> function calls the other functions, everything is only one transaction
>> again and I get memory overflow once more.
>>
>
> If the problem persists, maybe you could post your function somewhere. As it's apparently a rather long function, can
> you strip it down to something that still causes it to run out of memory but that will be a bit easier for the people on
> this list to wade through?
>
> Alban Hertroys
>
> --
> If you can't see the forest for the trees,
> cut the trees and you'll see there is no forest.
>
>
