Re: 8.3.0 Core with concurrent vacuum fulls

From: Tom Lane
Subject: Re: 8.3.0 Core with concurrent vacuum fulls
Date:
Msg-id: 25084.1204730017@sss.pgh.pa.us
In response to: Re: 8.3.0 Core with concurrent vacuum fulls  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: 8.3.0 Core with concurrent vacuum fulls  ("Gavin M. Roy" <gmr@myyearbook.com>)
Re: 8.3.0 Core with concurrent vacuum fulls  ("Heikki Linnakangas" <heikki@enterprisedb.com>)
List: pgsql-hackers
I wrote:
> In particular, if that's the problem, why has this not been seen before?
> The fact that it's going through heap_page_prune doesn't seem very
> relevant --- VACUUM FULL has certainly always had to invoke
> CacheInvalidateHeapTuple someplace or other.  So I still want to see
> the deadlock report ... we at least need to know which tables are
> involved in the deadlock.

Actually, maybe it *has* been seen before.  Gavin, are you in the habit
of running concurrent VACUUM FULLs on system catalogs, and if so have
you noted that they occasionally get deadlock failures?

> A separate line of thought is whether it's a good idea that
> heap_page_prune calls the inval code inside a critical section.
> That's what's turning an ordinary deadlock failure into a PANIC.
> Even without the possibility of having to do cache initialization,
> that's a lot of code to be invoking, and it has obvious failure
> modes (eg, out of memory for the new inval list item).

The more I think about this the more I don't like it.  The critical
section in heap_page_prune is *way* too big.  Aside from the inval
call, there are HeapTupleSatisfiesVacuum() calls, which could have
failures during attempted clog references.

The reason the critical section is so large is that we're manipulating
the contents of a shared buffer, and we don't want a failure to leave a
partially-modified page in the buffer.  We could fix that if we were to
memcpy the page into local storage and do all the pruning work there.
Then the critical section would only surround copying the page back to
the buffer and writing the WAL record.  Copying the page is a tad
annoying but heap_page_prune is an expensive operation anyway, and
I think we really are at too much risk of PANIC the way it's being done
now.  Has anyone got a better idea?
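The shape of the proposal above can be sketched roughly as follows. This is an illustrative mock-up, not actual PostgreSQL code: `shared_buffer`, `workspace`, `prune_page`, and `heap_page_prune_sketch` are invented names, and the critical-section and WAL calls are shown only as comments. The point is the ordering: all fallible work happens against a local copy, so the critical section shrinks to the copy-back plus the WAL write.

```c
#include <assert.h>
#include <string.h>

#define BLCKSZ 8192

/* Stands in for a page in a shared buffer. */
static char shared_buffer[BLCKSZ];

/* Placeholder for the actual pruning logic.  In the real code this is
 * where HeapTupleSatisfiesVacuum() and CacheInvalidateHeapTuple() would
 * run; an ERROR here (failed clog lookup, out-of-memory building the
 * inval list) leaves the shared page untouched because we are working
 * on a private copy. */
static void prune_page(char *page)
{
    page[0] = 1;
}

void heap_page_prune_sketch(void)
{
    char workspace[BLCKSZ];

    /* 1. Copy the shared page into local storage. */
    memcpy(workspace, shared_buffer, BLCKSZ);

    /* 2. Do all the pruning work, including anything that can ERROR,
     *    against the local copy. */
    prune_page(workspace);

    /* 3. Only now enter the critical section: copy the finished page
     *    back and emit the WAL record.  A failure in this short window
     *    genuinely justifies a PANIC, since the shared page could be
     *    half-modified. */
    /* START_CRIT_SECTION(); */
    memcpy(shared_buffer, workspace, BLCKSZ);
    /* ... XLogInsert() the prune record ... */
    /* END_CRIT_SECTION(); */
}
```

The extra memcpy is the "tad annoying" cost mentioned above, but it trades a per-call copy of one block for removing ERROR-capable code paths from inside the critical section.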
        regards, tom lane

