Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> Wong, Yi Wen wrote:
>> lazy_record_dead_tuple may silently fail to record a dead tuple, so its
>> heap page is never revisited by lazy_vacuum_heap for pruning, if there is
>> already a sufficiently large number of dead tuples in the array:
> Hmm, ouch, good catch.
> AFAICS this is a shouldn't-happen condition, since we bail out of the
> loop pessimistically as soon as we would be over the array limit if the
> next page were to be full of dead tuples (i.e., we never give the chance
> for overflow to actually happen). So unless I misunderstand, this could
> only fail if you set maintenance_work_mem smaller than necessary for one
> pageful of dead tuples, which should be about 1800 bytes ...
> If we wanted to be very sure about this we could add a test and perhaps
> abort the vacuum, but I'm not sure it's worth the trouble.
I think if we're going to depend on that, we should change the logic from
"don't record tuple if no space" to "throw error on no space".
regards, tom lane
--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs