Re: snapshot too old issues, first around wraparound and then more.

From: Stephen Frost
Subject: Re: snapshot too old issues, first around wraparound and then more.
Date:
Msg-id: 20210616161144.GK20766@tamriel.snowman.net
In response to: Re: snapshot too old issues, first around wraparound and then more.  (Greg Stark <stark@mit.edu>)
Responses: Re: snapshot too old issues, first around wraparound and then more.  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
Greetings,

* Greg Stark (stark@mit.edu) wrote:
> I think Andres's point earlier is the one that stands out the most for me:
>
> > I still think that's the most reasonable course. I actually like the
> > feature, but I don't think a better implementation of it would share
> > much if any of the current infrastructure.
>
> That makes me wonder whether ripping the code out early in the v15
> cycle wouldn't be a better choice. It would make it easier for someone
> to start work on a new implementation.
>
> There is the risk that the code would still be out and no new
> implementation would have appeared by the release of v15, but it
> sounds like that's what people are leaning towards anyway: ripping it
> out at that point.
>
> Fwiw I too think the basic idea of the feature is actually awesome.
> There are tons of use cases where you might have one long-lived
> transaction working on a dedicated table (or even a schema) that will
> never look at the rapidly mutating tables in another schema and never
> trigger the error even though those tables have been vacuumed many
> times over during its run-time.

I've long felt that the appropriate approach to addressing that is to
improve on VACUUM and find a way to do better than just having the
simple conditional 'xmax < global min' drive whether we can mark a given
tuple as no longer visible to anyone.

Not sure that all of the snapshot-too-old use cases could be solved that
way, nor am I even sure it's actually possible to make VACUUM smarter in
that way without introducing other problems or having to track much more
information than we do today. Still, it'd sure be nice if we could
address the use case you outline above while also not introducing query
failures if that transaction does happen to decide to go look at some
other table. Naturally, the tuples in that rapidly mutating table that
*would* be visible to the long-running transaction would have to be
kept around to make things work, but if the table is rapidly mutating
then it very likely contains lots of tuples that the long-running
transaction can't see, and which nothing else can see either, and those
could therefore be vacuumed.

Thanks,

Stephen

