Re: Massive table (500M rows) update nightmare

From Carlo Stonebanks
Subject Re: Massive table (500M rows) update nightmare
Date
Msg-id hi6ifm$1bi8$1@news.hub.org
In response to Re: Massive table (500M rows) update nightmare  (Scott Marlowe <scott.marlowe@gmail.com>)
Responses Re: Massive table (500M rows) update nightmare  (Scott Marlowe <scott.marlowe@gmail.com>)
List pgsql-performance
> It might well be checkpoints.  Have you tried cranking up checkpoint
> segments to something like 100 or more and seeing how it behaves then?

No I haven't, although it certainly makes sense - watching the process run,
you get the sense that the system occasionally pauses to take a deep, long
breath before returning to work frantically ;D

checkpoint_segments is currently set to 64. The DB is large and is in a
constant state of receiving single-row updates as multiple ETL and
refinement processes run continuously.

Would you expect going to 100 or more to make an appreciable difference, or
should I be more aggressive?
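For reference, the settings under discussion live in postgresql.conf. A minimal sketch of what a more aggressive checkpoint configuration might look like - the specific values here are illustrative, not a tested recommendation for this workload:

```ini
# postgresql.conf -- illustrative values only, tune for your workload
checkpoint_segments = 128            # allow more WAL segments between checkpoints
checkpoint_completion_target = 0.9   # spread checkpoint I/O across more of the interval
checkpoint_timeout = 15min           # upper bound on time between checkpoints
log_checkpoints = on                 # log each checkpoint to see if they match the stalls
```

With log_checkpoints enabled, the server log will show when each checkpoint starts and finishes, which should confirm or rule out whether the observed pauses line up with checkpoint activity before committing to larger values.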


In pgsql-performance by date of posting:

Previous
From: "ramasubramanian"
Date:
Subject: Array comparison
Next
From: "Carlo Stonebanks"
Date:
Subject: Re: Massive table (500M rows) update nightmare