Re: Add 64-bit XIDs into PostgreSQL 15

From: Chris Travers
Subject: Re: Add 64-bit XIDs into PostgreSQL 15
Date:
Msg-id: CAEq-hvvw-vt++nMPnXPnqwbdiPwTpge_z1JYpZPPxVg63Y+-oA@mail.gmail.com
In reply to: Re: Add 64-bit XIDs into PostgreSQL 15  (Aleksander Alekseev <aleksander@timescale.com>)
Responses: Re: Add 64-bit XIDs into PostgreSQL 15  (Aleksander Alekseev <aleksander@timescale.com>)
List: pgsql-hackers


On Tue, Nov 22, 2022 at 10:01 AM Aleksander Alekseev <aleksander@timescale.com> wrote:
Hi Chris,

> Right now the way things work is:
> 1.  Database starts throwing warnings that xid wraparound is approaching
> 2.  Database-owning team initiates an emergency response, may take downtime or degradation of services as a result
> 3.  People get frustrated with PostgreSQL because this is a reliability problem.
>
> What I am worried about is:
> 1.  Database is running out of space
> 2.  Database-owning team initiates an emergency response and takes more downtime to get into a good spot
> 3.  People get frustrated with PostgreSQL because this is a reliability problem.
>
> If that's the way we go, I don't think we've solved that much.  And as humans we also bias our judgments towards newsworthy events, so rarer, more severe problems are a larger perceived problem than the more routine, less severe problems.  So I think our image as a reliable database would suffer.
>
> An ideal resolution from my perspective would be:
> 1.  Database starts throwing warnings that xid lag has reached severely abnormal levels
> 2.  Database owning team initiates an effort to correct this, and does not take downtime or degradation of services as a result
> 3.  People do not get frustrated because this is not a reliability problem anymore.
>
> Now, 64-bit xids are necessary to get us there but they are not sufficient.  One needs to fix the way we handle this sort of problem.  There is existing logic to warn if we are approaching xid wraparound.  This should be changed to check how many xids we have used rather than how many remain, and have a sensible default there (optionally configurable).
>
> I agree it is not vacuum's responsibility.  It is the responsibility of the current warnings we have to avoid more serious problems arising from this change.  These should just be adjusted rather than dropped.

I disagree with the axiom that XID wraparound is merely a symptom and
not a problem.

XID wraparound doesn't happen to healthy databases, nor does it happen to databases actively monitoring this possibility.  In the cases where it happens, two circumstances are present:

1.  Autovacuum is stalled, and
2.  Monitoring is not checking for xid lag (which would be fixed by autovacuum if it were running properly).
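
For illustration, the kind of xid lag check I have in mind is just a catalog query like the following (what threshold you alert on is up to you; none of this depends on the patch):

  -- Tables whose relfrozenxid is furthest behind, i.e. the ones a stalled
  -- autovacuum has not managed to freeze.
  SELECT relname, age(relfrozenxid) AS xid_age
  FROM pg_class
  WHERE relkind IN ('r', 'm', 't')
  ORDER BY xid_age DESC
  LIMIT 10;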

XID wraparound is downstream of those problems.   At least that is my experience.  If you disagree, I would like to hear why.

Additionally, those problems will still cause worse outages with this change unless there are some mitigating measures in place.  If you don't like my proposal, I would be open to other mitigating measures.  But I think there need to be mitigating measures in a change like this.

Using 32-bit XIDs was a reasonable design decision back when disk
space was limited and disks were slow. The drawback of this approach
is the need to do the wraparound, but again, back then it was a
reasonable design choice. If XIDs had been 64-bit from the beginning,
users could run one billion (1,000,000,000) TPS for 584 years without
a wraparound. We wouldn't have wraparound at all, just as there is no
wraparound for WAL segments. Now that disks are much faster and much
cheaper, 32-bit XIDs are almost certainly not a good design choice
anymore. (Especially considering the fact that this particular patch
greatly mitigates the problem of increased disk consumption.)
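
(As a quick sanity check of that figure, assuming a constant 10^9 TPS and a 365.25-day year:

  SELECT (2.0 ^ 64) / 1e9 / (60 * 60 * 24 * 365.25) AS years;
  -- ≈ 584.5

i.e. roughly 584 years before a 64-bit counter would wrap.)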

I agree that 64-bit xids are a good idea.  I just don't think that existing safety measures should be ignored or reverted. 
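
To be concrete about the existing safety measures I mean: the freeze thresholds and the failsafe are ordinary settings and can be inspected directly (illustrative only; the warning and shutdown limits near wraparound itself are hard-coded in the server rather than exposed as GUCs):

  SELECT name, setting
  FROM pg_settings
  WHERE name IN ('autovacuum_freeze_max_age',
                 'vacuum_freeze_table_age',
                 'vacuum_failsafe_age');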

Also, I disagree with the argument that a DBA who doesn't monitor disk
space would care much about some strange warnings in the logs. If a
DBA doesn't monitor basic system metrics, I'm afraid we can't help this
person much.

The problem isn't just the lack of disk space, but the difficulty that stuck autovacuum runs pose in resolving the issue.  Keep in mind that everything you can do to reclaim disk space (vacuum full, cluster, pg_repack) will be significantly slowed down by an extremely bloated table/index combination.  The problem is that if you are running out of disk space and your time to recovery is much longer than expected, then you have a major problem.  It's not just one or the other, but the combination that poses the real risk here.
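
A rough way to see how much of that recovery work has piled up is the dead tuple statistics (an estimate only; a proper bloat check takes more than this):

  SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
  FROM pg_stat_user_tables
  ORDER BY n_dead_tup DESC
  LIMIT 10;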

Now that's fine if you want to run a bloatless table engine, but to my knowledge none of these are production-ready yet.  ZHeap seems mostly stalled.  Oriole is still experimental.  But with the current PostgreSQL table structure, that bloat and the long vacuums that come with it are facts of life.

A DBA can monitor disk space, but if the DBA is not also monitoring xid lag, then by the time corrective action is taken it may be too late.
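
Even something as simple as watching both together goes a long way (a sketch, not a full monitoring setup):

  SELECT datname,
         pg_size_pretty(pg_database_size(datname)) AS size,
         age(datfrozenxid) AS xid_age
  FROM pg_database
  ORDER BY age(datfrozenxid) DESC;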


I do agree that we could probably provide some additional help for the
rest of the users when it comes to configuring VACUUM. This is indeed
non-trivial. However I don't think this is in scope of this particular
patchset. I suggest we keep the focus in this discussion. If you have
a concrete proposal please consider starting a new thread.

This at least is my personal opinion. Let's give the rest of the
community a chance to share their thoughts.

Fair enough.  As I say, my proposal that this change needs mitigating measures comes from my experience with xid wraparound and vacuum runs that took 36+ hours.  At present my objection stands, and I hope the committers take that into account.

--
Best regards,
Aleksander Alekseev
