Re: Massive table (500M rows) update nightmare

From Kevin Grittner
Subject Re: Massive table (500M rows) update nightmare
Date
Msg-id 4B46E8A7020000250002E012@gw.wicourts.gov
In response to Re: Massive table (500M rows) update nightmare  ("Carlo Stonebanks" <stonec.register@sympatico.ca>)
Responses Re: Massive table (500M rows) update nightmare  ("Carlo Stonebanks" <stonec.register@sympatico.ca>)
List pgsql-performance
"Carlo Stonebanks" <stonec.register@sympatico.ca> wrote:

> Already done in an earlier post

Perhaps I misunderstood; I thought that post mentioned that the plan
was one statement in an iteration, and that the cache would have
been primed by a previous query checking whether there were any rows
to update.  If that was the case, it might be worthwhile to look at
the entire flow of an iteration.
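
Just to illustrate what I mean by the whole flow, an iteration in
that style would look something like this (the table and column
names here are made up; I'm only sketching the check-then-update
pattern I have in mind):

  -- hypothetical names, purely to show the two-statement pattern
  SELECT count(*)
    FROM big_table
   WHERE id BETWEEN 1000000 AND 1019999
     AND needs_update;

  UPDATE big_table
     SET some_col = new_value
   WHERE id BETWEEN 1000000 AND 1019999
     AND needs_update;

If you time only the UPDATE, you understate the real cost of each
iteration, because the SELECT has already pulled the affected pages
into cache.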

Also, if you ever responded with version and configuration
information, I missed it.  The solution to parts of what you
describe would be different in different versions.  In particular,
you might be able to solve checkpoint-related lockup issues and then
improve performance by using bigger batches.  Right now I would be
guessing at what might work for you.
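
For example, if you're on 8.3 or later, checkpoint spikes can often
be smoothed out with settings along these lines in postgresql.conf
(illustrative values only, not a recommendation for your system):

  checkpoint_segments = 64
  checkpoint_completion_target = 0.9
  checkpoint_timeout = 15min

Which knobs apply, and what values make sense, depends entirely on
the version and hardware you're running.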

-Kevin
