Discussion: Change checkpoint‑record‑missing PANIC to FATAL


Change checkpoint‑record‑missing PANIC to FATAL

From: Nitin Jadhav
Date:
Hi,

While working on [1], we discussed whether the redo-record-missing error should be a PANIC or a FATAL. We concluded that FATAL is more appropriate: it achieves the intended behavior in this situation, and it is consistent with the backup_label path, which already reports FATAL in the same scenario.

However, when the checkpoint record is missing, the behavior remains inconsistent: without a backup_label we currently raise a PANIC, while with a backup_label the same code path reports a FATAL. Since we have already changed the redo‑record‑missing case to FATAL in 15f68ce, it seems reasonable to align the checkpoint‑record‑missing case as well. The existing PANIC dates back to an era before online backups and archive recovery existed, when external manipulation of WAL was not expected and such conditions were treated as internal faults. With all such features, it is much more realistic for WAL segments to go missing due to operational issues, and such cases are often recoverable. So switching this to FATAL appears appropriate.
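
To make this concrete, the change itself is a one-line lowering of the
error level in xlogrecovery.c. A minimal sketch (surrounding code
omitted and placement only indicative; the message text is the one that
exists today):

ereport(FATAL,      /* was PANIC; FATAL ends startup without abort() */
        errmsg("could not locate a valid checkpoint record at %X/%08X",
               LSN_FORMAT_ARGS(CheckPointLoc)));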

Please share your thoughts.

I am happy to share a patch including a TAP test to cover this behavior once we agree to proceed.

[1]: https://www.postgresql.org/message-id/flat/CAMm1aWaaJi2w49c0RiaDBfhdCL6ztbr9m%3DdaGqiOuVdizYWYaA%40mail.gmail.com

Best Regards,
Nitin Jadhav
Azure Database for PostgreSQL
Microsoft

Re: Change checkpoint‑record‑missing PANIC to FATAL

From: Michael Paquier
Date:
On Tue, Dec 16, 2025 at 04:25:37PM +0530, Nitin Jadhav wrote:
> it seems reasonable to align the checkpoint‑record‑missing case as well.
> The existing PANIC dates back to an era before online backups and archive
> recovery existed, when external manipulation of WAL was not expected and
> such conditions were treated as internal faults. With all such features, it
> is much more realistic for WAL segments to go missing due to operational
> issues, and such cases are often recoverable. So switching this to FATAL
> appears appropriate.
>
> Please share your thoughts.

FWIW, I think that we should lift the PANIC pattern in this case, at
least to be able to provide more tests around the manipulation of WAL
segments when triggering recovery, with or without a backup_label as
well as with or without a recovery/standby.signal defined in the tree.
The PANIC pattern to blow up the backend when missing a checkpoint
record at the beginning of recovery is a historical artifact of
4d14fe0048cf.  The backend has evolved a lot since, particularly with
WAL archives that came much later than that.  Lowering that to a FATAL
does not imply a loss of information, just the lack of a backtrace
that can be triggered depending on how one has set up a cluster to
start (say a recovery.signal was forgotten and pg_wal/ has no
contents, etc.).  And IMO I doubt that a trace is really useful anyway
in this specific code path.

I'd love to hear the opinion of others on the matter, so if anybody
has comments, feel free.

I'd be curious to look at the number of tests related to recovery
startup you have in mind anyway, Nitin.
--
Michael


Re: Change checkpoint‑record‑missing PANIC to FATAL

From: Nitin Jadhav
Date:
> I'd be curious to look at the number of tests related to recovery
> startup you have in mind anyway, Nitin.

Apologies for the delay.
At a high level, the recovery startup cases we want to test fall into
two main buckets:
(1) with a backup_label file and (2) without a backup_label file.

From these two situations, we can cover the following scenarios:
1) Primary crash recovery without a backup_label – Delete the WAL
segment containing the checkpoint record and try starting the server
(a sketch for identifying that segment follows below).
2) Primary crash recovery with a backup_label – Take a base backup
(which creates the backup_label), remove the checkpoint WAL segment,
and start the server with that backup directory.
3) Standby crash recovery – Stop the standby, delete the checkpoint
WAL segment, and start it again to see how standby recovery behaves.
4) PITR / archive‑recovery – Remove the checkpoint WAL segment and
start the server with a valid restore_command so it enters archive
recovery.

Tests (2) and (4) are fairly similar, so we can merge them if they
turn out to be redundant.
These are the scenarios I have in mind so far. Please let me know if
you think anything else should be added.
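
For the "delete the WAL segment containing the checkpoint record" step
used by all of the scenarios above, the segment can be identified from
the checkpoint LSN reported by pg_controldata (pg_walfile_name() does
the same mapping at the SQL level). For reference, a server-side sketch
of that mapping using the existing macros from xlog_internal.h
(illustration only, not part of the proposed patch; "lsn" and "tli" are
assumed inputs):

#include "postgres.h"
#include "access/xlog.h"            /* wal_segment_size */
#include "access/xlog_internal.h"   /* XLByteToSeg(), XLogFileName() */

/*
 * Write into fname (MAXFNAMELEN bytes) the name of the WAL segment
 * file that contains the given LSN on the given timeline.
 */
static void
segment_name_for_lsn(XLogRecPtr lsn, TimeLineID tli, char *fname)
{
    XLogSegNo   segno;

    XLByteToSeg(lsn, segno, wal_segment_size);
    XLogFileName(fname, tli, segno, wal_segment_size);
}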

Best Regards,
Nitin Jadhav
Azure Database for PostgreSQL
Microsoft



Re: Change checkpoint‑record‑missing PANIC to FATAL

From: Michael Paquier
Date:
On Mon, Dec 29, 2025 at 08:39:08PM +0530, Nitin Jadhav wrote:
> Apologies for the delay.
> At a high level, the recovery startup cases we want to test fall into
> two main buckets:
> (1) with a backup_label file and (2) without a backup_label file.

For clarity's sake, we are talking about lowering this one in
xlogrecovery.c, which relates to the code path where there is no
backup_label file:
ereport(PANIC,
        errmsg("could not locate a valid checkpoint record at %X/%08X",
               LSN_FORMAT_ARGS(CheckPointLoc)));

> From these two situations, we can cover the following scenarios:
> 1) Primary crash recovery without a backup_label – Delete the WAL
> segment containing the checkpoint record and try starting the server.

Yeah, let's add a test for that.  It would be enough to remove the
segment that includes the checkpoint record.  There should be no need
to be fancy with injection points like the other test case from
15f68cebdcec.

> 2) Primary crash recovery with a backup_label – Take a base backup
> (which creates the backup_label), remove the checkpoint WAL segment,
> and start the server with that backup directory.

Okay.  I don't mind something here, for the two FATAL cases in the
code path where the backup_label exists:
- REDO record missing with checkpoint record found.  This is similar
to 15f68cebdcec.
- Checkpoint record missing.
Both should be cross-checked with the FATAL errors generated in the
server logs.

> 3) Standby crash recovery – Stop the standby, delete the checkpoint
> WAL segment, and start it again to see how standby recovery behaves.

In this case, we need to have a restore_command set anyway, no?
Meaning that we should never fail?  I don't recall that we have a test
for that, currently, where we could look at the server logs to check
that a segment has been retrieved because the segment that includes
the checkpoint record is missing.

> 4) PITR / archive‑recovery – Remove the checkpoint WAL segment and
> start the server with a valid restore_command so it enters archive
> recovery.

Same as 3) to me, standby mode cannot be activated without a
restore_command, and the recovery GUC checks are done in accordance with
the signal files before we attempt to read the initial checkpoint
record.

> Tests (2) and (4) are fairly similar, so we can merge them if they
> turn out to be redundant.
> These are the scenarios I have in mind so far. Please let me know if
> you think anything else should be added.

For the sake of the change from the PANIC to FATAL mentioned at the
top of this message, (1) would be enough.

The two cases of (2) I'm mentioning would be nice bonuses.  I would
recommend double-checking first whether we already trigger these errors
in some of the existing tests; perhaps we don't need to add anything
except a check in some node's logs for the error string patterns
wanted.
--
Michael


Re: Change checkpoint‑record‑missing PANIC to FATAL

From: Nitin Jadhav
Date:
Hi Michael,

Thanks for the detailed feedback.

> For clarity's sake, we are talking about lowering this one in
> xlogrecovery.c, which relates to the code path where there is no
> backup_label file:
> ereport(PANIC,
>         errmsg("could not locate a valid checkpoint record at %X/%08X",
>                LSN_FORMAT_ARGS(CheckPointLoc)));

I agree that case (1) is sufficient for the purpose of this change. I
mentioned the scenarios where a backup_label file exists mainly to
consider additional coverage in this area, but I agree those would
only be bonuses, as you note later.

> For the sake of the change from the PANIC to FATAL mentioned at the
> top of this message, (1) would be enough.
>
> The two cases of (2) I'm mentioning would be nice bonuses.  I would
> recommend double-checking first whether we already trigger these errors
> in some of the existing tests; perhaps we don't need to add anything
> except a check in some node's logs for the error string patterns
> wanted.

I agree with your assessment. Case (1) is enough for this change, and
the cases in (2) would be nice bonuses. I’m fine with dropping cases
(3) and (4) for now.

I had a quick look at the existing recovery TAP tests and didn’t
immediately find a case where simply adding log checks would cover
these error paths, but I’ll double‑check once more before sending the
patch. I’ll work on this and share the patch soon.

Best Regards,
Nitin Jadhav
Azure Database for PostgreSQL
Microsoft



Re: Change checkpoint‑record‑missing PANIC to FATAL

From: Michael Paquier
Date:
On Thu, Feb 19, 2026 at 08:24:02AM +0530, Nitin Jadhav wrote:
> I had a quick look at the existing recovery TAP tests and didn’t
> immediately find a case where simply adding log checks would cover
> these error paths, but I’ll double‑check once more before sending the
> patch. I’ll work on this and share the patch soon.

Thanks, Nitin.  Perhaps it would be a better approach to split the
patch into multiple pieces, with the most relevant PANIC->FATAL
switches and the most critical tests on top of the rest.  It would be
nice to get most of that by the end of the release cycle, or a rather
"good" chunk of it.
--
Michael
