On 3/28/2011 12:35 PM, Jan Wieck wrote:
> On 3/27/2011 10:43 PM, Tom Lane wrote:
>
>> In particular, I thought the direction Jan was headed was to release and
>> reacquire the lock between truncating off limited-size chunks of the
>> file. If we do that, we probably *don't* want or need to allow autovac
>> to be booted off the lock more quickly.
>
> That is correct.
>
>>> 3) Scanning backwards 8MB at a time, scanning each 8MB chunk forwards
>>> instead of just going back block by block.
>>
>> Maybe. I'd want to see some experimental evidence justifying the choice
>> of chunk size; I'm pretty sure this will become counterproductive once
>> the chunk size is too large.
>
> Me too, which is why that part of my proposal is highly questionable and
> requires a lot of evidence to be even remotely considered for back releases.
Attached is a patch against HEAD that implements the part that truncates
the heap in small batches (512 pages at a time), without changing the
scan direction.

It retries the exclusive lock acquisition several times, because with
this approach I found that lock requests queued up behind the exclusive
lock held by autovacuum otherwise make it too likely that the truncation
gives up after just a few batches.
I am going to see what similar logic does for 8.4, where the exclusive
lock has far more severe consequences for client connections.
Jan
--
Anyone who trades liberty for security deserves neither
liberty nor security. -- Benjamin Franklin