Re: Excessive memory used for INSERT

From: Alessandro Ipe
Subject: Re: Excessive memory used for INSERT
Date:
Msg-id: 3672040.FVo2HxYOvt@snow.oma.be
In reply to: Re: Excessive memory used for INSERT (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Excessive memory used for INSERT
List: pgsql-performance
Hi,


My trigger definition is
CREATE TRIGGER msg_trigger BEFORE INSERT ON msg FOR EACH ROW EXECUTE PROCEDURE msg_function();
so it seems that it is indeed a BEFORE trigger.

To be totally honest, I have really limited knowledge of SQL and PostgreSQL, and all of this was gathered from recipes
found on the web...


Regards,


Alessandro.


On Wednesday 17 December 2014 10:41:31 Tom Lane wrote:
> Torsten Zuehlsdorff <mailinglists@toco-domains.de> writes:
> > How many rows is "(SELECT * FROM upsert)" returning? Without knowing
> > more i would guess, that the result-set is very big and that could be
> > the reason for the memory usage.
>
> Result sets are not ordinarily accumulated on the server side.
>
> Alessandro didn't show the trigger definition, but my guess is that it's
> an AFTER trigger, which means that a trigger event record is accumulated
> in server memory for each inserted/updated row.  If you're trying to
> update a huge number of rows in one command (or one transaction, if it's
> a DEFERRED trigger) you'll eventually run out of memory for the event
> queue.
>
> An easy workaround is to make it a BEFORE trigger instead.  This isn't
> really nice from a theoretical standpoint; but as long as you make sure
> there are no other BEFORE triggers that might fire after it, it'll work
> well enough.
>
> Alternatively, you might want to reconsider the concept of updating
> hundreds of millions of rows in a single operation ...
>
>             regards, tom lane
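[Archive note] The BEFORE-trigger workaround Tom describes can be sketched roughly as below. This is only an illustration, not the actual function from the thread (whose body was never posted): the column names `key` and `payload` are assumed; only `msg`, `msg_function`, and `msg_trigger` come from the messages above.

```sql
-- Hypothetical sketch of an upsert done in a BEFORE INSERT trigger.
-- Because it is a BEFORE trigger, PostgreSQL queues no after-event
-- record per row, so memory stays flat even for huge INSERTs.
CREATE OR REPLACE FUNCTION msg_function() RETURNS trigger AS $$
BEGIN
    UPDATE msg SET payload = NEW.payload WHERE key = NEW.key;
    IF FOUND THEN
        RETURN NULL;  -- an existing row was updated; suppress the INSERT
    END IF;
    RETURN NEW;       -- no matching row; let the INSERT proceed
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER msg_trigger BEFORE INSERT ON msg
    FOR EACH ROW EXECUTE PROCEDURE msg_function();
```

As Tom notes, this relies on no other BEFORE trigger firing after this one, and on newer PostgreSQL versions INSERT ... ON CONFLICT avoids the trigger entirely.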


