Re: processing large amount of rows with plpgsql

From: Geert Mak
Subject: Re: processing large amount of rows with plpgsql
Date:
Msg-id: 156BB7A4-1F36-4023-A885-77BDB2AF2699@verysmall.org
In response to: Re: processing large amount of rows with plpgsql  (Merlin Moncure <mmoncure@gmail.com>)
Responses: Re: processing large amount of rows with plpgsql  ("Marc Mamin" <M.Mamin@intershop.de>)
List: pgsql-general
On 08.08.2012, at 22:04, Merlin Moncure wrote:

> What is the general structure of the procedure?  In particular, how
> are you browsing and updating the rows?

Here it is -

DECLARE
    statistics_row statistics%ROWTYPE;
BEGIN
    FOR statistics_row IN SELECT * FROM statistics ORDER BY time ASC
    LOOP
        ...
        ... here some very minimal transformation is done
        ... and the row is written into the second table
        ...
    END LOOP;
    RETURN 1;
END;
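(As an aside: when the per-row transformation can be expressed in SQL, a single set-based statement avoids the row-by-row loop entirely. A minimal sketch, where the target table name, column names, and the transformation function are all assumptions, not taken from the thread:

```sql
-- Hypothetical sketch: statistics_transformed, its columns, and
-- some_transformation() are assumed names for illustration only.
-- One set-based INSERT ... SELECT replaces the whole plpgsql loop,
-- but note it still runs as a single transaction, so it does not
-- reduce the peak disk/WAL footprint discussed in this thread.
INSERT INTO statistics_transformed (time, value)
SELECT time, some_transformation(value)
FROM statistics
ORDER BY time ASC;
```

This tends to be faster than a plpgsql loop, though it does not help with the intermediate-commit problem.)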

> There is (almost) no way to
> force commit inside a function --

So what you are saying is that this behavior is normal, and we should either equip ourselves with enough disk space
(which I am trying now; it is a cloud server, which I am resizing to gain more disk space to see what happens) or
do it in an external (scripting) language?

> there has been some discussion about
> stored procedure and/or autonomous transaction feature in terms of
> getting there.
>
> I say 'almost' because you can emulate some aspects of autonomous
> transactions with dblink, but that may not be a very good fit for your
> particular case.
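(For reference, the dblink emulation Merlin mentions works because each statement sent through a dblink connection commits on its own, independently of the calling transaction. A hedged sketch, assuming the dblink extension is installed; the connection string, batch predicate, and target table name are illustrative assumptions:

```sql
-- Sketch of batch-wise commits via dblink. Each dblink_exec() runs in
-- a separate connection and is committed there (autocommit), so already
-- processed batches stay committed even if a later batch fails.
-- 'dbname=mydb', the date predicate, and statistics_transformed are
-- placeholder assumptions.
SELECT dblink_connect('batch', 'dbname=mydb');

SELECT dblink_exec('batch',
    'INSERT INTO statistics_transformed
     SELECT * FROM statistics WHERE time < ''2012-01-01''');
-- ... repeat dblink_exec() once per time range / batch ...

SELECT dblink_disconnect('batch');
```

The usual caveat is bookkeeping: the caller must track which batches have been applied, since they no longer roll back together.)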

I have already seen dblink mentioned in this context somewhere... Though if plpgsql performs well once I have more disk
space, I'll leave it for now. This is a one-time operation.

Thank you,
Geert

In the pgsql-general list, by message date:

Previous
From: Tom Lane
Date:
Message: Re: Problem running "ALTER TABLE...", ALTER TABLE waiting
Next
From: "Marc Mamin"
Date:
Message: Re: processing large amount of rows with plpgsql