Re: 64 bit transaction id
From | Tomas Vondra |
---|---|
Subject | Re: 64 bit transaction id |
Date | |
Msg-id | 20191101171038.6nmbvdiixnhbwe77@development |
In reply to | Re: 64 bit transaction id (Pavel Stehule <pavel.stehule@gmail.com>) |
List | pgsql-hackers |
On Fri, Nov 01, 2019 at 10:25:17AM +0100, Pavel Stehule wrote:
>Hi
>
>On Fri, Nov 1, 2019 at 10:11, Павел Ерёмин <shnoor111gmail@yandex.ru>
>wrote:
>
>> Hi.
>> Sorry for my English.
>> I want to once again open the topic of 64-bit transaction ids. I did not
>> manage to find the option I want to discuss in the archive, so I am
>> writing. If I searched poorly, then please forgive me.
>> The idea is not very original and has probably already been considered;
>> again, I repeat - I did not find it. Therefore, please do not scold me
>> severely.
>> In discussions of 64-bit transaction ids, I did not find any mention of
>> an algorithm for storing them the way it is done, for example, in MS SQL
>> Server. What if, instead of two fields (xmin and xmax) with a total
>> length of 64 bits, we used one 64-bit field (let's call it xid) in the
>> tuple header? This field would store the xid of the transaction that
>> created the version. In that case, for a new transaction to understand
>> whether the version it reads is suitable for it or not, it would have to
>> read the next version as well. That is, the downside of such a decision
>> is of course an increase in I/O: transactions would have to read one
>> extra version. On the plus side, the tuple header stays the same length.
>>
>
>Is a 32-bit xid really a problem? Why do you need to know the state of
>the last 2^31 transactions? Isn't the problem rather too little use (or
>maybe too high overhead) of VACUUM FREEZE?
>

It certainly can be an issue for large and busy systems that may need an
anti-wraparound vacuum every couple of days. If that requires rewriting a
couple of TB of data, it's not particularly nice.

That's why 64-bit XIDs were discussed repeatedly in the past, and it's
likely to get even more pressing as systems get larger.

>I am not sure if increasing this range can have much more fatal problems
>(maybe later)
>

Well, not fatal, but naive approaches can increase per-tuple overhead. And
we already have plenty of that, hence there were proposals to use page
epochs and so on.

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
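
To make the single-xid proposal quoted above a bit more concrete, here is a
rough C sketch of what such a tuple header and the visibility check it
implies could look like. The struct and function names are hypothetical
illustrations, not PostgreSQL's actual heap tuple format; the point is only
that deciding whether one version is visible requires also consulting its
successor, which is the extra I/O the original mail concedes.

    /* Illustrative sketch only -- not PostgreSQL's on-disk format. */
    #include <stdint.h>
    #include <stdbool.h>

    typedef uint64_t TransactionId64;        /* hypothetical 64-bit XID */

    typedef struct ProposedTupleHeader
    {
        TransactionId64 xid;                 /* xid that created this version */
        /* ... link to the next version, infomask bits, etc. ... */
    } ProposedTupleHeader;

    /* A version is visible if its creator is visible to the snapshot and the
     * version that superseded it (if any) is not yet visible.  The lookup of
     * next_ver is the additional read the proposal accepts in exchange for a
     * header with one 64-bit field instead of xmin and xmax. */
    static bool
    version_visible(const ProposedTupleHeader *ver,
                    const ProposedTupleHeader *next_ver,   /* NULL if none */
                    bool (*xid_visible_to_snapshot)(TransactionId64))
    {
        if (!xid_visible_to_snapshot(ver->xid))
            return false;                    /* creator not visible yet */
        if (next_ver != NULL && xid_visible_to_snapshot(next_ver->xid))
            return false;                    /* already replaced by a visible version */
        return true;
    }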
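
For comparison, a minimal sketch of the page-epoch idea mentioned at the
end: keep the existing 32-bit xids in tuple headers, store a 64-bit base
once per page, and reconstruct the full value on access. The PageXidBase
struct and page_full_xid helper are assumptions made up for illustration,
not code from any actual patch.

    #include <stdint.h>

    typedef uint32_t TransactionId;          /* per-tuple xid as stored today */
    typedef uint64_t FullTransactionId;      /* logical 64-bit xid */

    typedef struct PageXidBase
    {
        uint64_t xid_base;                   /* per-page base added to tuple xids */
    } PageXidBase;

    /* Reconstruct the full 64-bit xid without widening the tuple header. */
    static inline FullTransactionId
    page_full_xid(const PageXidBase *page, TransactionId tuple_xid)
    {
        return page->xid_base + (uint64_t) tuple_xid;
    }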