Wong, Yi Wen wrote:
> There is actually another case where HEAPTUPLE_DEAD tuples may be kept and have
> prepare_freeze skipped on them entirely.
>
> lazy_record_dead_tuple may fail to record the tuple for later pruning
> by lazy_vacuum_heap if there is already a sufficiently large number of
> dead tuples in the array:
Hmm, ouch, good catch.
AFAICS this is a shouldn't-happen condition, since we bail out of the
loop pessimistically as soon as we would be over the array limit if the
next page were to be full of dead tuples (i.e., we never give overflow a
chance to actually happen). So unless I misunderstand, this could only
fail if you set maintenance_work_mem smaller than necessary for one
pageful of dead tuples, which comes to about 1800 bytes ...
If we wanted to be very sure about this we could add a test and perhaps
abort the vacuum, but I'm not sure it's worth the trouble.
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)