Re: multixacts woes

From: andres@anarazel.de (Andres Freund)
Subject: Re: multixacts woes
Msg-id: 20150508182713.GT12950@alap3.anarazel.de
In reply to: multixacts woes (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: multixacts woes (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
Hi,

On 2015-05-08 14:15:44 -0400, Robert Haas wrote:
> Apparently, we have been hanging our hat since the release of 9.3.0 on
> the theory that the average multixact won't ever have more than two
> members, and therefore the members SLRU won't overwrite itself and
> corrupt data.

It's actually a much older problem - it has essentially existed since
multixacts were introduced (8.1?). The consequences were much less
severe before 9.3, though.

> 3. It seems to me that there is a danger that some users could see
> extremely frequent anti-mxid-member-wraparound vacuums as a result of
> this work.  Granted, that beats data corruption or errors, but it
> could still be pretty bad.

It's certainly possible to have workloads triggering that, but I think
it's relatively uncommon. In most cases I've checked, the multixact
consumption rate is much lower than the xid consumption rate. There are
some exceptions, but often that's down to pretty bad application code.
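
As an aside, for anyone wanting to compare the two rates on their own
cluster: one low-tech approach is to sample NextXID and NextMultiXactId
(e.g. from pg_controldata's checkpoint info) twice and compare the
deltas. A toy sketch - the sample values below are invented for
illustration, not from any real cluster:

```python
# Hypothetical values sampled an hour apart, e.g. from pg_controldata's
# "Latest checkpoint's NextXID" / "NextMultiXactId" lines.  Made up.
sample_t0 = {"NextXID": 1_000_000, "NextMultiXactId": 50_000}
sample_t1 = {"NextXID": 1_360_000, "NextMultiXactId": 53_600}

def consumption_rates(t0, t1, elapsed_seconds):
    """Per-second xid and multixact consumption between two samples."""
    xid_rate = (t1["NextXID"] - t0["NextXID"]) / elapsed_seconds
    mxid_rate = (t1["NextMultiXactId"] - t0["NextMultiXactId"]) / elapsed_seconds
    return xid_rate, mxid_rate

xid_rate, mxid_rate = consumption_rates(sample_t0, sample_t1, 3600)
print(f"xid/s={xid_rate:.2f} mxid/s={mxid_rate:.2f}")  # mxids 100x slower here
```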

> At that
> point, I think it's possible that relminmxid advancement might start
> to force full-table scans more often than would be required for
> relfrozenxid advancement.  If so, that may be a problem for some
> users.

I think it's the best we can do right now.
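
For reference, how often those anti-mxid-member-wraparound vacuums and
full-table scans kick in is governed by the multixact freeze GUCs; the
values below are just the stock defaults, not a tuning recommendation:

```ini
# postgresql.conf excerpt - stock defaults
autovacuum_multixact_freeze_max_age = 400000000  # force vacuum past this mxid age
vacuum_multixact_freeze_table_age   = 150000000  # full-table scan threshold
vacuum_multixact_freeze_min_age     = 5000000    # minimum age before freezing
```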

> What can we do about this?  Alvaro proposed back-porting his fix for
> bug #8470, which avoids locking a row if a parent subtransaction
> already has the same lock.  Alvaro tells me (via chat) that on some
> workloads this can dramatically reduce multixact size, which is
> certainly appealing.  But the fix looks fairly invasive - it changes
> the return value of HeapTupleSatisfiesUpdate in certain cases, for
> example - and I'm not sure it's been thoroughly code-reviewed by
> anyone, so I'm a little nervous about the idea of back-porting it at
> this point.  I am inclined to think it would be better to release the
> fixes we have - after handling items 1 and 2 - and then come back to
> this issue.  Another thing to consider here is that if the high rate
> of multixact consumption is organic rather than induced by lots of
> subtransactions of the same parent locking the same tuple, this fix
> won't help.

I'm not inclined to backport it at this stage. Maybe if we get some
field reports about too many anti-wraparound vacuums due to this, *and*
the code has been tested in 9.5.

> Another thought that occurs to me is that if we had a freeze map, it
> would radically decrease the severity of this problem, because
> freezing would become vastly cheaper.  I wonder if we ought to try to
> get that into 9.5, even if it means holding up 9.5

I think that's not realistic. Doing this right isn't easy, and doing it
wrong can lead to quite bad results, i.e. data corruption. Doing it
under the pressure of delaying a release further and further seems like
a recipe for disaster.

> Quite aside from multixacts, repeated wraparound autovacuuming of
> static data is a progressively more serious problem as data set sizes
> and transaction volumes increase.

Yes. Agreed.

> The possibility that multixact freezing may in some
> scenarios exacerbate that problem is just icing on the cake.  The
> fundamental problem is that a 32-bit address space just isn't that big
> on modern hardware, and the problem is worse for multixact members
> than it is for multixact IDs, because a given multixact only uses
> consumes one multixact ID, but as many slots in the multixact member
> space as it has members.
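
To make the quoted arithmetic concrete: both the multixact ID space and
the member space are 32-bit, so with an average of m members per
multixact the member space fills roughly m times faster than the ID
space. A toy sketch (the average sizes passed in are made-up inputs):

```python
# Both multixact IDs and member offsets live in a 32-bit address space.
ADDRESS_SPACE = 2 ** 32

def member_space_fill_factor(avg_members_per_mxid: float) -> float:
    """How many times faster the member space fills than the ID space.

    Creating one multixact consumes exactly one ID but avg_members
    member slots, so the member space is exhausted avg_members times
    sooner than the ID space.
    """
    ids_until_id_wrap = ADDRESS_SPACE                  # one ID per multixact
    mxids_until_member_wrap = ADDRESS_SPACE / avg_members_per_mxid
    return ids_until_id_wrap / mxids_until_member_wrap

print(member_space_fill_factor(2.0))  # the "average of two members" case -> 2.0
print(member_space_fill_factor(5.0))  # larger lock groups wrap members 5x faster
```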

FWIW, I intend to either work on this myself, or help whoever seriously
tackles this, in the next cycle.

Greetings,

Andres Freund


