2008/6/26 Tom Lane <tgl@sss.pgh.pa.us>:
> "jay" <jackem.mojx@alibaba-inc.com> writes:
>> I know the problem: there are about 35 million rows, which take up
>> about 12 GB of disk space, and checkpoint_segments is set to 64. The
>> update runs in a single transaction, which fills the checkpoint
>> segments quickly and forces frequent checkpoints. Checkpoints consume
>> a lot of resources, so the update gets slower and slower, and the
>> bgwriter won't write the pages because the transaction hasn't
>> committed yet.
>> Creating a new table might be a quick solution, but it isn't
>> appropriate in some cases.
>> If we could commit every 1000 rows per round, that might resolve the
>> problem.
>
> No, that's utterly unrelated. Transaction boundaries have nothing to do
> with checkpoints.
True. But if you update 10000 rows at a time and vacuum in between,
you can keep the bloat down to something reasonable.
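A rough sketch of the batched approach (the table name foo, the key
id, and the batch size are all just placeholders, not from the
original post):

    -- foo / id / col are hypothetical names; repeat this pair of
    -- statements until the UPDATE reports 0 rows, then stop
    UPDATE foo
       SET col = 0
     WHERE id IN (SELECT id FROM foo WHERE col <> 0 LIMIT 10000);
    VACUUM foo;

Each round is its own small transaction, and the VACUUM between rounds
lets the next batch reuse the dead row space instead of growing the
table.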
On another note, I haven't seen anyone suggest adding the appropriate
where clause to keep from updating rows that already match. That's
cheap compared to updating the whole table, even if a large chunk of
the rows aren't a match. i.e.
... set col = 0 where col <> 0;
If it applies, that should be the first thing you reach for in this
situation.
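Spelled out against the same hypothetical table foo:

    -- foo is a placeholder table name
    UPDATE foo SET col = 0 WHERE col <> 0;

Rows already at 0 are skipped entirely, so they produce no dead row
versions for vacuum to clean up.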