Re: Massive table (500M rows) update nightmare

From: Marcin Mank
Subject: Re: Massive table (500M rows) update nightmare
Date:
Msg-id: b1b9fac61001071305vf182f3ajff6827f92c943c68@mail.gmail.com
In reply to: Massive table (500M rows) update nightmare ("Carlo Stonebanks" <stonec.register@sympatico.ca>)
Responses: Re: Massive table (500M rows) update nightmare
List: pgsql-performance
> every update is an UPDATE ... WHERE id >= x AND id < x+10, and a commit
> is performed after every 1000 update statements, i.e. every 10000 rows.

What is the rationale behind this? How about doing 10k rows in one
UPDATE, and committing after each statement?
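Something like this, per batch (the table name big_table, the columns
id and new_column, and the constant 0 standing in for the real backfill
expression are all placeholders, not from the original job):

    BEGIN;
    UPDATE big_table
       SET new_column = 0              -- stand-in for the real expression
     WHERE id >= 0 AND id < 10000;     -- one 10000-row slice per transaction
    COMMIT;

Each batch is then one index range scan and one commit, instead of ten
of each.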

You could also try putting the condition on the ctid column instead, to
avoid the index on id altogether and process the rows in physical
order. First make sure that newly inserted production data gets the
correct value in the new column, then add 'WHERE new_column IS NULL' to
the conditions. But I have never tried this, so use it at your own
risk.
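An untested sketch of what I mean (same placeholder names as above;
'(N,0)'::tid addresses heap page N, and 10000 pages per batch is an
arbitrary choice):

    BEGIN;
    UPDATE big_table
       SET new_column = 0                  -- stand-in expression
     WHERE ctid >= '(0,0)'::tid
       AND ctid <  '(10000,0)'::tid        -- one range of heap pages per batch
       AND new_column IS NULL;             -- skip rows already backfilled
    COMMIT;

Note that updating a row gives it a new ctid, possibly on a page a
later batch will visit again, which is one more reason the
new_column IS NULL guard is needed.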

Greetings
Marcin Mank
