Re: Free space management within heap page

From: Pavan Deolasee
Subject: Re: Free space management within heap page
Date:
Msg-id: 2e78013d0701240648s6a21af71v3457695432b94654@mail.gmail.com
In reply to: Re: Free space management within heap page  (Gregory Stark <stark@enterprisedb.com>)
List: pgsql-hackers

On 1/24/07, Gregory Stark <stark@enterprisedb.com> wrote:
"Pavan Deolasee" <pavan.deolasee@gmail.com> writes:

> On 1/24/07, Martijn van Oosterhout <kleptog@svana.org> wrote:
>>
>> I thought the classical example was a transaction that updated the same
>> tuple multiple times before committing. Then the version prior to the
>> transaction start isn't dead yet, but all but one of the versions
>> created by the transaction will be dead (they were never visible by
>> anybody else anyway).
>
> I believe that calculation of oldestXmin would consider the running
> transaction, if any, which can still see the original tuple. So the
> intermediate tuples won't be declared DEAD (they will be declared
> RECENTLY_DEAD) as long as the other transaction is running. Any newer
> transactions would always see the committed copy and hence need not follow
> ctid through the dead tuples.

Martijn is correct that HeapTupleSatisfiesVacuum considers tuples dead if
they were created and deleted by the same transaction even if that
transaction isn't past the oldestxmin horizon.

I agree. Here the tuple must have been created by an INSERT and not an
UPDATE: if it were created by an UPDATE, the HEAP_UPDATED bit would be set
on the tuple, and HeapTupleSatisfiesVacuum would not consider it dead even
if its xmin and xmax are the same. So it must have been created by an INSERT,
in which case there cannot be a parent tuple linking to it via t_ctid.
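
For reference, the special case being discussed looks roughly like the
fragment below. This is a paraphrased sketch of the 8.2-era logic in
tqual.c, not the exact source; it assumes the deleting transaction is
already known to have committed, and OldestXmin is the horizon VACUUM
passes in.

    /*
     * Sketch of the xmin == xmax special case in HeapTupleSatisfiesVacuum(),
     * reached once the deleting transaction has committed (paraphrased).
     */
    if (TransactionIdEquals(HeapTupleHeaderGetXmin(tuple),
                            HeapTupleHeaderGetXmax(tuple)))
    {
        /*
         * Inserted and deleted by the same transaction, so it was never
         * visible to anyone else.  But if HEAP_UPDATED is set, a parent
         * tuple still links to it through t_ctid, so it must not be
         * removed before the parent is.
         */
        if (!(tuple->t_infomask & HEAP_UPDATED))
            return HEAPTUPLE_DEAD;
    }

    if (!TransactionIdPrecedes(HeapTupleHeaderGetXmax(tuple), OldestXmin))
        return HEAPTUPLE_RECENTLY_DEAD;  /* deleter too recent to remove */

    return HEAPTUPLE_DEAD;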
 
There's already been one bug in that area when it broke update chains, and to
fix it vacuum ignores tuples that were deleted by the same transaction in an
UPDATE statement.

Sounds logical.
 
This seems like such an unusual case, especially now that it's been narrowed
by that exception, that it's silly to optimize for it. Just treat these tuples
as live and they'll be vacuumed when their transaction commits and passes the
oldestxmin like normal.


I agree. Nevertheless, I don't see any problem with having that optimization.

Now that I think more about it, there are places where the xmin of the next
tuple in the t_ctid chain is matched against the xmax of the previous tuple,
to detect cases where one of the intermediate DEAD tuples has been vacuumed
away and the slot has been reused by a completely unrelated tuple (see the
sketch below). So doesn't that mean we have already made provision for
scenarios where intermediate DEAD tuples are vacuumed away?
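
The code that walks a t_ctid chain guards against a recycled slot with a
check of roughly this shape. This is an illustrative fragment rather than
a copy of the source; priorXmax and tp are assumed local variables holding
the previous tuple's xmax and the heap tuple just fetched.

    /*
     * Following a t_ctid chain: before trusting the next tuple, verify
     * that its xmin matches the xmax of the tuple that pointed at it.
     * If it doesn't, the intermediate tuple was vacuumed away and its
     * slot reused by an unrelated tuple, so the chain ends here.
     */
    if (TransactionIdIsValid(priorXmax) &&
        !TransactionIdEquals(HeapTupleHeaderGetXmin(tp.t_data), priorXmax))
        break;                  /* chain broken: slot was reused */

    /* ... use this tuple, then remember its xmax before following t_ctid ... */
    priorXmax = HeapTupleHeaderGetXmax(tp.t_data);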

Thanks,
Pavan


EnterpriseDB     http://www.enterprisedb.com
