Re: POC: Cleaning up orphaned files using undo logs

From: Heikki Linnakangas
Subject: Re: POC: Cleaning up orphaned files using undo logs
Date:
Msg-id: 7cfdb160-e8d2-1785-5c57-8245774df0b7@iki.fi
In reply to: Re: POC: Cleaning up orphaned files using undo logs  (Thomas Munro <thomas.munro@gmail.com>)
Responses: Re: POC: Cleaning up orphaned files using undo logs  (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
On 05/08/2019 07:23, Thomas Munro wrote:
> On Mon, Aug 5, 2019 at 3:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
>> On Sun, Aug 4, 2019 at 2:46 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:
>>> Could we leave out the UNDO and discard worker processes for now?
>>> Execute all UNDO actions immediately at rollback, and after crash
>>> recovery. That would be fine for cleaning up orphaned files,
>>
>> Even if we execute all the undo actions on rollback, we need discard
>> worker to discard undo at regular intervals.  Also, what if we get an
>> error while applying undo actions during rollback?  Right now, we have
>> a mechanism to push such a request to background worker and allow the
>> session to continue.  Instead, we might want to Panic in such cases if
>> we don't want to have background undo workers.
>>
>>> and it
>>> would cut down the size of the patch to review.
>>
>> If we can find some way to handle all cases and everyone agrees to it,
>> that would be good. In fact, we can try to get the basic stuff
>> committed first and then try to get the rest (undo-worker machinery)
>> done.
> 
> I think it's definitely worth exploring.

Yeah. For cleaning up orphaned files, if unlink() fails, we can just log 
the error and move on. That's what we do in the main codepath, too. For 
any other error, PANIC seems ok. We're not expecting any errors during 
undo processing, so it doesn't seem safe to continue running.

Hmm. Since applying the undo record is WAL-logged, you could run out of 
disk space while creating the WAL record. That seems unpleasant.
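As a rough sketch of the error-handling policy above (the function name and plain-stderr logging are invented for illustration; the real code would go through ereport() with WARNING, and other undo failures would PANIC):

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * Hypothetical sketch: apply an "unlink orphaned file" undo action.
 * If unlink() fails, log the error and move on, mirroring what the
 * main codepath does; unexpected errors elsewhere in undo processing
 * would PANIC instead of continuing.
 */
static void
undo_unlink_relfile(const char *path)
{
    if (unlink(path) != 0)
        fprintf(stderr, "WARNING: could not remove file \"%s\": %s\n",
                path, strerror(errno));
}
```

Note that the function deliberately does not distinguish ENOENT from other unlink() failures: for file cleanup, the worst outcome of pressing on is a leaked file, which is exactly the condition undo is best-effort cleaning up anyway.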

>>> Can this race condition happen: Transaction A creates a table and an
>>> UNDO record to remember it. The transaction is rolled back, and the file
>>> is removed. Another transaction, B, creates a different table, and
>>> chooses the same relfilenode. It loads the table with data, and commits.
>>> Then the system crashes. After crash recovery, the UNDO record for the
>>> first transaction is applied, and it removes the file that belongs to
>>> the second table, created by transaction B.
>>
>> I don't think such a race exists, but we should verify it once.
>> Basically, once the rollback is complete, we mark the transaction
>> rollback as complete in the transaction header in undo and write a WAL
>> for it.  After crash-recovery, we will skip such a transaction.  Isn't
>> that sufficient to prevent such a race condition?

Ok, I didn't realize there's a flag in the undo record to mark it as 
applied. Yeah, that fixes it. Seems a bit heavy-weight, but I guess it's 
fine. Do you do something different in zheap? I presume writing a WAL 
record for every applied undo record would be too heavy there.
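The skip-on-recovery logic Amit describes could be sketched like this (struct and function names are invented; the real patch keeps this flag in the undo transaction header and WAL-logs the update when rollback completes):

```c
#include <stdbool.h>

/*
 * Hypothetical sketch of the "rollback complete" flag discussed above.
 * Once a transaction's undo actions have been executed, the flag is set
 * in the transaction header in undo and a WAL record is written for it.
 */
typedef struct UndoTxnHeader
{
    bool rollback_complete;  /* undo actions already applied? */
} UndoTxnHeader;

/*
 * During crash recovery: skip transactions whose undo was already
 * executed, so a reused relfilenode created by a later transaction
 * cannot be removed by replaying stale undo.
 */
static bool
undo_needs_apply(const UndoTxnHeader *hdr)
{
    return !hdr->rollback_complete;
}
```

The flag closes the race because setting it is WAL-logged before the relfilenode can be handed out again: after a crash, recovery sees the completed rollback and never re-runs the file removal.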

This needs some performance testing. We're creating one extra WAL record 
and one UNDO record for every file creation, and another WAL record on 
abort. It's probably cheap compared to all the other work done during 
table creation, but we should still get some numbers on it.

Some regression tests would be nice too.

- Heikki


