Thread: Resetting recovery target parameters in pg_createsubscriber
Hi Hackers,
I noticed that pg_createsubscriber sets recovery target params for
correct recovery before converting a physical replica to a logical
one but does not reset them afterward. It can lead to recovery
failures in certain scenarios.
For example, if recovery begins from a checkpoint where no WAL records
need to be applied, the system might incorrectly determine that the
recovery target was never reached because these parameters remain
active.
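For illustration, the kind of settings that remain in postgresql.auto.conf after the conversion and then interfere with a later recovery look roughly like this (parameter names and values here are illustrative, not taken from a real run):
```
# left behind in postgresql.auto.conf by the conversion (illustrative values)
recovery_target_lsn = '0/3000060'
recovery_target_inclusive = true
recovery_target_action = promote
```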
I’ve attached a TAP test to reproduce the issue.
The proposed patch ensures all recovery parameters are reset after
conversion to prevent such edge cases.
I would appreciate any feedback.
--
Regards,
Alyona Vinter
Attachments
Dear Laaren,

> I noticed that pg_createsubscriber sets recovery target params for
> correct recovery before converting a physical replica to a logical
> one but does not reset them afterward. It can lead to recovery
> failures in certain scenarios.
> For example, if recovery begins from a checkpoint where no WAL records
> need to be applied, the system might incorrectly determine that the
> recovery target was never reached because these parameters remain
> active.

Thanks for reporting. I knew that the parameters won't be overwritten, but I didn't
realize there was a case where recovery fails.

> I’ve attached a TAP test to reproduce the issue.
> The proposed patch ensures all recovery parameters are reset after
> conversion to prevent such edge cases.

WriteRecoveryConfig() has been used to set up the recovery parameters. Can we
follow the same way to restore them?

Also, can we add a test to 040_pg_createsubscriber? IIUC it is enough to check
that one of the recovery parameters is reset after the conversion.

Best regards,
Hayato Kuroda
FUJITSU LIMITED
On Mon, Sep 01, 2025 at 02:06:34AM +0000, Hayato Kuroda (Fujitsu) wrote:
> WriteRecoveryConfig() has been used to set up the recovery parameters. Can we
> follow the same way to restore them?
>
> Also, can we add a test to 040_pg_createsubscriber? IIUC it is enough to check
> that one of the recovery parameters is reset after the conversion.

Yeah, we'd want some tests to check the behaviors and expectations in
this tool. This tool is complex enough that this is going to be
mandatory, and making a test cheaper is always nicer.

FWIW, I find the proposed patch a bit dangerous. It updates
pg_createsubscriber.c so that an ALTER SYSTEM is used to reset the
parameters, but the recovery parameters are updated via
WriteRecoveryConfig(), which is the code path holding the knowledge
that postgresql.auto.conf is used to hold the recovery parameters.

I don't much like the fact that this creates a duplication with
setup_recovery() for the list of parameters handled. All the recovery
parameters are forced to a hardcoded value, except recovery_target_lsn.
So perhaps it would be better to maintain in pg_createsubscriber.c a
list made of (GUC name, value) pairs, with the LSN part handled as an
exception for the value to assign.

GenerateRecoveryConfig() can work with a replication connection;
relying on ALTER SYSTEM would not be able to do that properly, so
perhaps we should just invent a new routine that resets a portion of
the file on disk, because recovery_gen.c's code already assumes that it
has write access to a data folder to do its work?
--
Michael
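A minimal sketch of the kind of (GUC name, value) table described above, with recovery_target_lsn treated as the run-time exception; the struct, names, and values are illustrative, not the actual patch:
```
/*
 * Hypothetical sketch only: one shared table of the recovery GUCs that
 * pg_createsubscriber forces, so setup and reset walk the same list.
 * A NULL value marks recovery_target_lsn, whose value is computed at
 * run time from the consistent point.
 */
static const struct
{
	const char *name;
	const char *value;
} recovery_gucs[] = {
	{"recovery_target_inclusive", "true"},
	{"recovery_target_action", "promote"},
	{"recovery_target_lsn", NULL},
};
```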
Attachments
Dear Michael and Hayato,
Thank you both for your valuable feedback on the previous patch version.
I've reworked the patch based on your suggestions: the new version should address the concerns about ALTER SYSTEM and follows the same patterns as the setup_recovery() code.
I kept primary_conninfo as-is for now since I'm not totally sure if we need to touch it.
I look forward to your feedback! ;)
Best regards,
Alyona Vinter
Attachments
Dear Alyona,
Thanks for updating the patch!
Sadly, your patch cannot be applied cleanly. Even after a manual merge, it could not
be built. Maybe `dbinfo` should be `dbinfos.dbinfo`. The resulting error message is shown in [1].
(cfbot seemed not to run correctly.)
Regarding the patch content, your patch restores postgresql.auto.conf after the
command runs. Initially I felt that it would be enough to set the GUCs below, because only
they are changed from the default. Is there a reason why you fully restore them?
```
recovery_target_inclusive true
recovery_target_action pause
recovery_target_lsn ""
```
[1]
```
../postgres/src/bin/pg_basebackup/pg_createsubscriber.c: In function ‘main’:
../postgres/src/bin/pg_basebackup/pg_createsubscriber.c: In function ‘main’:
../postgres/src/bin/pg_basebackup/pg_createsubscriber.c:2526:31: error: ‘dbinfo’ undeclared (first use in this
function); did you mean ‘dbinfos’?
2526 | reset_recovery_params(dbinfo, subscriber_dir);
| ^~~~~~
| dbinfos
```
Best regards,
Hayato Kuroda
FUJITSU LIMITED
Dear Hayato,
Thank you for the review! My apologies for the error in the patch -- it looks like I accidentally modified it before sending =(. I've attached the fixed versions below.
> Regarding the patch content, your patch restores postgresql.auto.conf after the
> command runs. Initially I felt that it would be enough to set the GUCs below, because only
> they are changed from the default. Is there a reason why you fully restore them?
I just found it easier to restore the original state of 'postgresql.auto.conf', as removing parameters from the file resets them to their default values. This approach achieves the same final state without having to explicitly set each one.
Attachments
Hi,
CFbot indicated some issues with the patch. I've attached rebased versions of the patches, so hopefully everything will be ok this time.
Best regards,
Alyona Vinter
Attachments
Sorry, wrong patches again. Here are the correct ones.
Best regards,
Alyona Vinter
Attachments
Hello, Alyona!

On Mon, Sep 8, 2025 at 8:35 AM Alyona Vinter <dlaaren8@gmail.com> wrote:
>
> Sorry, wrong patches again. Here are the correct ones.

I went through these patches.

1) I've removed the array of parameters. I see it was proposed by
Michael upthread. But I think his proposal came from the fact that we walk
through the parameters twice. But we end up walking through the
parameters once in setup_recovery(), while reset_recovery_params() just
restores the previous contents. I think it makes sense to keep the
changes minimal.

2) I reordered the patches so that the helper function goes first. I think it's
essential to order commits in a way that every commit leaves our tree
in a working state.

3) I ran pgperltidy over 040_pg_createsubscriber.pl.

Any thoughts?

------
Regards,
Alexander Korotkov
Supabase
Attachments
On Mon, Sep 15, 2025 at 10:29:47AM +0300, Alexander Korotkov wrote:
> I went through these patches.
> 1) I've removed the array of parameters. I see it was proposed by
> Michael upthread. But I think his proposal came from the fact that we walk
> through the parameters twice. But we end up walking through the
> parameters once in setup_recovery(), while reset_recovery_params() just
> restores the previous contents. I think it makes sense to keep the
> changes minimal.
Yeah, my concern was about the duplication of the list. As long as a
fix does not do any of that, I'm OK. Sorry if my idea of a list of
parameters felt misguided if we make recovery_gen.c smarter with the
handling of the on-disk files.
> 2) I reordered the patches so that the helper function goes first. I think it's
> essential to order commits in a way that every commit leaves our tree
> in a working state.
Yep. That would create some noise if one bisects for example. These
are always annoying because they make analysis of a range of commits
longer with more false positives. If you have a large range of
commits, the odds are usually very low, but who knows..
> 3) I ran pgperltidy over 040_pg_createsubscriber.pl.
> Any thoughts?
GetRecoveryConfig() and ReplaceRecoveryConfig() should have some
documentation, regarding what the callers of these functions can
expect from them.
+ use_recovery_conf =
+ PQserverVersion(pgconn) < MINIMUM_VERSION_FOR_RECOVERY_GUC;
+
+ snprintf(tmp_filename, MAXPGPATH, "%s/%s.tmp", target_dir,
+ use_recovery_conf ? "recovery.conf" : "postgresql.auto.conf");
+
+ snprintf(filename, MAXPGPATH, "%s/%s", target_dir,
+ use_recovery_conf ? "recovery.conf" : "postgresql.auto.conf"
No need for use_recovery_conf. You could just set a pointer to the
file name instead and avoid the duplication.
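A sketch of that suggestion, reusing only the names already visible in the quoted fragment (pgconn, target_dir, tmp_filename, filename):
```
/* Sketch: pick the configuration file name once and reuse it. */
const char *conf_name = (PQserverVersion(pgconn) < MINIMUM_VERSION_FOR_RECOVERY_GUC) ?
	"recovery.conf" : "postgresql.auto.conf";

snprintf(tmp_filename, MAXPGPATH, "%s/%s.tmp", target_dir, conf_name);
snprintf(filename, MAXPGPATH, "%s/%s", target_dir, conf_name);
```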
+ cf = fopen(tmp_filename, "w");
+ if (cf == NULL)
+ pg_fatal("could not open file \"%s\": %m", tmp_filename);
"a" is used in fopen() when calling WriteRecoveryConfig() when under
use_recovery_conf. Perhaps this inconsistency should be specified as
a comment because we are generating a temporary file from scratch with
the new recovery GUC contents?
This patch also means that pg_createsubscriber is written so that the
contents added to recovery.conf/postgresql.auto.conf by
setup_recovery() are never reset if there is a failure in-flight. Is
that OK, or should we also have an exit callback to do the cleanup work
in such cases?
Perhaps these internal manipulations should be documented as well, to
make the users of this tool aware of the steps they may need to take in
the event of an in-flight failure? pg_createsubscriber includes a
"How it works" section that explains how the tool works, including the
part about the recovery parameters. The changes of this patch become
implied facts, and are not reflected in the docs. That sounds like a
problem to me, because we are hiding some of the internal logic
while the docs are written so that they explain all these details.
--
Michael
Attachments
Hi Michael and Alexander,
Thank you both for your help! I really appreciate it.
As a newcomer here, I might make some mistakes, but I hope with your guidance I can avoid making them in the future =)
> Yeah, my concern was about the duplication of the list. As long as a
> fix does not do any of that, I'm OK. Sorry if my idea of a list of
> parameters felt misguided if we make recovery_gen.c smarter with the
> handling of the on-disk files.
I understand your concern about avoiding duplication. I thought that defining all the parameters explicitly at the top of the file would lead to clearer and nicer code, which is why I left it that way (even without the duplication). But now I agree with Alexander's point about keeping the changes minimal.
> This patch also means that pg_createsubscriber is written so that the
> contents added to recovery.conf/postgresql.auto.conf by
> setup_recovery() are never reset if there is a failure in-flight. Is
> that OK, or should we also have an exit callback to do the cleanup work
> in such cases?
It's a good idea to add an exit callback. Additionally, I'd like to propose adding a pre-flight check at the start. This check would look for any existing recovery configuration that might be an artifact from a previous aborted run and warn the user or handle it appropriately. What do you think about implementing both the exit callback and the pre-flight check?
> "How it works" section that explains how the tool works, including the
> part about the recovery parameters.
I looked through the `pg_createsubscriber.c` file but wasn't able to locate a "How it works" section. Could you please point me to the specific file or line number you are referring to? Or do you mean all the descriptive comments? For context, I'm currently working on the version where my patch is being tested in CI.
I will work on improving the code and will also add the documentation notes that Michael has pointed out ASAP.
Best regards,
Alyona Vinter
On Tue, Sep 16, 2025 at 05:27:43PM +0700, Alyona Vinter wrote:
>> This patch also means that pg_createsubscriber is written so that the
>> contents added to recovery.conf/postgresql.auto.conf by
>> setup_recovery() are never reset if there is a failure in-flight. Is
>> that OK, or should we also have an exit callback to do the cleanup work
>> in such cases?
>
> It's a good idea to add an exit callback. Additionally, I'd like to propose
> adding a pre-flight check at the start. This check would look for any
> existing recovery configuration that might be an artifact from a previous
> aborted run and warn the user or handle it appropriately. What do you think
> about implementing both the exit callback and the pre-flight check?

I am not sure how much a pre-flight check would help if we have an
exit callback that would make sure that things are cleaned up on exit.
Is there any need to worry about a kill(9) that would cause the exit
cleanup callback to not be called? We don't bother about that usually,
so I don't see a strong case for it here, either. :) Alexander may
have a different opinion.

> I will work on improving the code and will also add the documentation notes
> that Michael has pointed out ASAP.

Thanks.
--
Michael
Attachments
Hi,
I'm back with improvements :)
I've added code comments in `recovery_gen.c` and expanded the documentation in `pg_createsubscriber.sgml`.
About the recovery parameters cleanup: I thought about adding an exit callback, but it doesn't really make sense because once the target server gets promoted (which happens soon after we set the parameters), there's no point in cleaning up - the server is already promoted and can't be used as a replica again and must be recreated. Also, `reset_recovery_params()` might call `exit()` itself, which could cause problems with the cleanup callback.
So I think it's better to just warn users about leftover parameters and let them handle the cleanup manually if needed.
By the way, is it ok that the second patch includes both code and test changes together, or should I split them into separate commits?
I look forward to your feedback!
Regards,
Alena Vinter
Attachments
On Tue, Sep 23, 2025 at 12:04:04PM +0700, Alena Vinter wrote:
> About the recovery parameters cleanup: I thought about adding an exit
> callback, but it doesn't really make sense because once the target server
> gets promoted (which happens soon after we set the parameters), there's no
> point in cleaning up - the server is already promoted and can't be used as
> a replica again and must be recreated. Also, `reset_recovery_params()`
> might call `exit()` itself, which could cause problems with the cleanup
> callback.

Your argument does not consider one case, which is very common:
pg_rewind. Even if the standby finishes recovery and is promoted with
its new recovery parameters, we could rewind it rather than recreate a
new standby from scratch. That's cheaper than recreating a new
physical replica from scratch. Keeping the recovery parameters added
by pg_createsubscriber around would make pg_rewind's work more
complicated, because it does similar manipulations, for different
requirements.

The tipping point where we would not be able to reuse the promoted
standby happens at the last step of pg_createsubscriber, in
modify_subscriber_sysid(), where its system ID is changed. Before
that, the code also makes an effort to clean up anything that's been
created in-between. Even the system ID argument is not entirely true,
actually. One could also decide to switch the system ID back to what
it was previously to match the primary. That requires a bit more
magic, but that's not impossible.

> So I think it's better to just warn users about leftover parameters and let
> them handle the cleanup manually if needed.

Warnings tend to be ignored and missed, especially these days when
vendors automate these actions. It is true that there could be an
argument about requiring extra implementation steps on each vendor's
side, but they would also need to keep up with any new GUCs that
pg_createsubscriber may add in the future when setting up its recovery
parameters, which would mean extra work for everybody, increasing the
range of problems for some logic that's isolated to
pg_createsubscriber. In short, I disagree with what you are doing
here: we should take the extra step and clean up anything that's been
created by the tool when we know we can safely do so (aka adding a
static flag that the existing cleanup callback should rely on, which
is already what your patch 0003 does to show a warning).

> By the way, is it ok that the second patch includes both code and test
> changes together, or should I split them into separate commits?

The tests and the fix touch entirely separate code paths; keeping them
together is no big deal.
--
Michael
Attachments
Hi,
> In short, I disagree with what you are doing here: we should take the
> extra step and clean up anything that's been created by the tool when
> we know we can safely do so
I got your point, thanks for pointing to the `pg_rewind` case. I've attached a new version of the patches. I've changed `ReplaceRecoveryConfig` a little bit -- now it returns false in case of an error instead of exiting.
Best wishes,
Alena Vinter
Attachments
On Mon, Sep 29, 2025 at 04:57:09PM +0700, Alena Vinter wrote:
> I got your point, thanks for pointing to the `pg_rewind` case. I've
> attached a new version of the patches. I've changed `ReplaceRecoveryConfig`
> a little bit -- now it returns false in case of an error instead of exiting.
#include "common/logging.h"
+#include "common/file_utils.h"
Incorrect include file ordering.
+GetRecoveryConfig(PGconn *pgconn, const char *target_dir)
[...]
+ char data[1024];
[...]
+ while ((bytes_read = fread(data, 1, sizeof(data), cf)) > 0)
+ {
+ data[bytes_read] = '\0';
+ appendPQExpBufferStr(contents, data);
+ }
You are assuming that this will never overflow. However, recovery
parameters could include commands, which are mostly limited to
MAXPGPATH, itself 1024. So that's unsafe. The in-core routine
pg_get_line(), or the rest of pg_get_line.c, is safer to use, relying
on malloc() in the frontend for the lines fetched.
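A sketch of that idea, assuming the cf and contents variables from the quoted fragment, pg_get_line_buf() from src/common (declared in common/string.h), and the frontend StringInfo support from lib/stringinfo.h:
```
/* Sketch: read the file line by line into a buffer that grows as needed,
 * instead of a fixed 1024-byte array. */
StringInfoData linebuf;

initStringInfo(&linebuf);
while (pg_get_line_buf(cf, &linebuf))
	appendPQExpBufferStr(contents, linebuf.data);
pfree(linebuf.data);
```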
+ pg_log_warning_hint("Manual removal of recovery parameters is required from 'postgresql.auto.conf'
(PostgreSQL%d+) or 'recovery.conf' (older versions)", + MINIMUM_VERSION_FOR_RECOVERY_GUC
/10000);
Hmm, okay here. You would need that hint anyway if you cannot connect
to determine which file the recovery parameters need to go to; the
other code-path failures in ReplaceRecoveryConfig() would include the
file name, which offers a sufficient hint about the version, but a
connect_database() failure does not.
+static bool recovery_params_set = false;
+static bool recovery_params_reset = false;
Hmm. We may need an explanation about these, in the shape of a
comment, to document what's expected from them. Rather than two
booleans, using an enum tracking the state of the parameters would be
cleaner? And actually, you do not need two flags. Why not just
switch recovery_params_set to false once ReplaceRecoveryConfig() is
called?
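For example, a hypothetical single state tracker instead of the two booleans (a sketch, not the patch's code):
```
/* Hypothetical sketch: one piece of state instead of two booleans. */
typedef enum
{
	RECOVERY_PARAMS_NONE,		/* nothing written yet */
	RECOVERY_PARAMS_SET,		/* setup_recovery() wrote the parameters */
	RECOVERY_PARAMS_RESET		/* ReplaceRecoveryConfig() restored them */
} RecoveryParamsState;

static RecoveryParamsState recovery_params_state = RECOVERY_PARAMS_NONE;
```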
+reset_recovery_params(const struct LogicalRepInfo *dbinfo, const char *datadir)
[...]
+ recoveryconfcontents = GenerateRecoveryConfig(conn, NULL, NULL);
Why do we need to call GenerateRecoveryConfig() again when resetting
recovery.conf/postgresql.auto.conf with its original contents before
switching the system ID of the new replica? I may be missing
something, of course, but we're done with recovery, so I don't quite
see the point in appending the recovery config generated with the
original contents. If this is justified (I don't think it is), this
deserves a comment to explain the reason behind this logic.
--
Michael
Attachments
Hi Michael,
Thank you for the review!
> Why not just
> switch recovery_params_set to false once ReplaceRecoveryConfig() is
> called?
Stupid me!
> Why do we need to call GenerateRecoveryConfig() again when resetting
> recovery.conf/postgresql.auto.conf with its original contents before
> switching the system ID of the new replica? I may be missing
> something, of course, but we're done with recovery, so I don't quite
> see the point in appending the recovery config generated with the
> original contents. If this is justified (I don't think it is), this
> deserves a comment to explain the reason behind this logic.
This relates to the point I mentioned earlier about being unsure whether we should preserve `primary_conninfo`:
> I kept primary_conninfo as-is for now since I'm not totally sure if we need to touch it
The reason I called `GenerateRecoveryConfig()` was to regenerate the `primary_conninfo` string in the recovery configuration file. If we should remove it, then the reset function can be much simpler. Could you please help me clarify whether we should regenerate `primary_conninfo` or whether we can safely purge it?
Best regards,
Alena Vinter
On Tue, Sep 30, 2025 at 12:22:08PM +0700, Alena Vinter wrote:
> This relates to the point I mentioned earlier about being unsure whether we
> should preserve `primary_conninfo`:
> > I kept primary_conninfo as-is for now since I'm not totally sure if we
> > need to touch it.
>
> The reason I called `GenerateRecoveryConfig()` was to regenerate the
> `primary_conninfo` string in the recovery configuration file. If we should
> remove it, then the reset function can be much simpler. Could you please
> help me clarify whether we should regenerate `primary_conninfo` or whether
> we can safely purge it?

Based on the contents of the latest patch, we reset the parameters
after promoting the node, and primary_conninfo only matters while we
are in recovery, for a standby recovering WAL using the streaming
replication protocol.
--
Michael
Attachments
Dear Alena,
Thanks for updating the patch. Few comments.
```
+ /* Before setting up the recovery parameters save the original content. */
+ savedrecoveryconfcontents = GetRecoveryConfig(conn, datadir);
```
To confirm: you pass the connection to the primary/publisher instead of the standby/subscriber.
But it is harmless, because streaming replication requires that both instances
have the same major version. Is that correct?
```
+ pg_log_warning_hint("Manual removal of recovery parameters is required from 'postgresql.auto.conf'
(PostgreSQL%d+) or 'recovery.conf' (older versions)",
+ MINIMUM_VERSION_FOR_RECOVERY_GUC / 10000);
```
Can we cache the version info when we first connect to the primary node, so we can
print the appropriate filename? Or is that too hacky?
```
+ if (dry_run)
+ {
+ appendPQExpBufferStr(savedrecoveryconfcontents, "# dry run mode");
+ }
```
Per my understanding, setup_recovery() puts the indicator because the content
can be printed. I think it is not needed, since reset_recovery_params() does not
have that, or we could even print the parameters.
```
+sub test_param_absent
+{
+ my ($node, $param) = @_;
+ my $auto_conf = $node->data_dir . '/postgresql.auto.conf';
+
+ return 1 unless -e $auto_conf;
+
+ my $content = slurp_file($auto_conf);
+ return $content !~ /^\s*$param\s*=/m;
+}
```
Can you add a short comment atop the function? Something like:
"Check whether the given parameter is specified in postgresql.auto.conf"
Best regards,
Hayato Kuroda
FUJITSU LIMITED
Hello Alena!
I am new to reviewing here, and I tried to review your patch.
I agree with Michael's review regarding
+char data[1024];
This looks unsafe.
+static bool recovery_params_set = false;
+static bool recovery_params_reset = false;
Using two booleans here looks wrong to me.
Maybe one is enough with refactored logic in
cleanup_objects_atexit()?
+pg_log_warning_hint("Manual removal of recovery parameters is required from 'postgresql.auto.conf' (PostgreSQL %d+) or
'recovery.conf'(older versions)",
Do we need the info about recovery.conf here, since the patch applies only to master?
Also, I am not sure what scenario we are protecting against.
I set up logical replication via pg_createsubscriber first and did this:
./bin/pg_ctl -D standby -l standby stop -m fast
touch standby/recovery.signal
./bin/pg_ctl -D standby -l standby start
with restore_command = 'cp /home/postgresql-install/wal_archive/%f "%p"'
With no patch I got:
LOG: invalid record length at 0/A0000A0: expected at least 24, got 0
LOG: redo is not required
FATAL: recovery ended before configured recovery target was reached
But with patches applied I successfully started the standby.
Did I get the idea right?
Kind regards,
Ian Ilyasov.
On Sun, Oct 05, 2025 at 10:30:53PM +0000, Ilyasov Ian wrote:
> Do we need the info about recovery.conf here, since the patch applies only
> to master?

And actually, I think that you are pointing at a bug here.
pg_createsubscriber does updates of the control file, but it includes
zero checks based on PG_CONTROL_VERSION to make sure that it is able
to work with a version compatible with what's on disk. The CRC check
would be reported as incorrect after calling get_controlfile(), but
it's disturbing to claim that the control file looks corrupted. So,
oops?

[.. checks ..]

The last control file update has been done in 44fe30fdab67, and
attempting to run pg_createsubscriber on a v17 cluster leads to:

$ pg_createsubscriber -D $HOME/data/5433 -P "host=/tmp port=5432" -d postgres
pg_createsubscriber: error: control file appears to be corrupt

So, yes, oops. We document that pg_createsubscriber should have the
same major version as the source and target servers, which is good.
This error is no good, especially as checking it is just a few lines
of code, and the check here is actually PG_CONTROL_VERSION for control
file consistency.
--
Michael
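For reference, the check alluded to above would only be a few lines after get_controlfile(); a sketch, with cf standing in for whatever the ControlFileData pointer is called in pg_createsubscriber.c:
```
/* Sketch: refuse a data directory from a different major version instead
 * of reporting the control file as corrupt. */
if (cf->pg_control_version != PG_CONTROL_VERSION)
	pg_fatal("control file version %u does not match expected version %u",
			 cf->pg_control_version, PG_CONTROL_VERSION);
```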
Attachments
Hi everyone,
Thank you for all the valuable feedback! I've improved the patches in the latest version.
> Based on the contents of the latest patch, we reset the parameters
> after promoting the node, and primary_conninfo only matters while we
> are in recovery, for a standby recovering WAL using the streaming
> replication protocol.
Michael, thanks for helping! This fact simplifies the code. I put resetting the parameters exclusively in the `atexit` callback -- this approach seems neater to me. What do you think?
> Did I get the idea right?
Ian, yes, you got it right. The core issue occurs when postgres encounters a checkpoint during recovery, determines redo isn't needed (because there are no records after the checkpoint), but then fails with a fatal error because it cannot reach the specified LSN target (which is lower than the checkpoint LSN). I reckon this is a recovery logic issue, but I also believe the component that sets recovery parameters should be responsible for cleaning them up when they're no longer required.
Best wishes,
Alena Vinter
Attachments
On Mon, Oct 06, 2025 at 01:25:12PM +0700, Alena Vinter wrote:
> Michael, thanks for helping! This fact simplifies the code. I put resetting
> the parameters exclusively in the `atexit` callback -- this approach seems
> neater to me. What do you think?

I have been looking at this patch for a couple of hours, and I don't
really like the result, for a variety of reasons. Some of the reasons
come with the changes in recovery_gen.c themselves, as proposed in the
patch, where the only thing we want to do is replace the contents of
one file with the other; some other reasons come from the way
pg_createsubscriber complicates its life on HEAD.

There is no need to read the contents line by line and write them
back, we can just do file manipulations. The reason why the patch does
things this way currently is that it has zero knowledge of the file
location where the recovery parameter contents are written, because
this location is internal to recovery_gen.c, at least based on how
pg_createsubscriber is written. And well, this fact is wrong even on
HEAD: we know where the recovery parameters are written because
pg_createsubscriber is documented as only supporting the same major
version as the one where the tool has been compiled. So it is
pointless to call WriteRecoveryConfig() with a connection object
(using a PGconn pointer in this API is an artifact of pg_basebackup,
where we support base backups taken from older major versions when
using a newer version of the tool). pg_createsubscriber has no need to
bind to this limitation, but we don't need to improve this point for
the sake of this thread. The proposed patch is written without taking
this issue into account, and the patch has a lot of logic that's not
necessary. There is no point in referring to recovery.conf in the code
and the tests, as well.

Anyway, a second reason why I am not cool with the patch is that the
contents written by pg_createsubscriber are entirely erased from
existence, and I see a good point in keeping a trace of them at least
for post-operation debugging purposes.

With all that in mind, I came up with the following solution, which is
able to fix what you want to address (aka not load any of the recovery
parameters written by the tool if you reactivate a standby with a new
signal file), while also satisfying my condition, which is to keep a
track of the parameters written. Hence, let's:
- Forget about the changes in recovery_gen.c.
- Call WriteRecoveryConfig() with only one line added in the contents
written to the "recovery" file (which is postgresql.auto.conf, okay):
include_if_exists = 'pg_createsubscriber.conf'
- Write the parameters generated by pg_createsubscriber to this new
configuration file.
- In the exit callback, call durable_rename() and rename
pg_createsubscriber.conf to pg_createsubscriber.conf.old.

There is no need to cache the backend version or rely on a connection.
We'll unlikely see a failure. Even if there is a failure, fixing the
problem would be just to move or delete the extra file, and
documenting that is simpler.

All that points to the direction that we may not want to backpatch any
of this, considering these changes as improvements in usability.
--
Michael
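To make that proposed layout concrete, the two files would end up looking roughly like this (the recovery parameter values are illustrative):
```
# appended to postgresql.auto.conf via WriteRecoveryConfig(), alongside
# its usual generated contents
include_if_exists = 'pg_createsubscriber.conf'

# pg_createsubscriber.conf, holding the recovery parameters; the exit
# callback renames it to pg_createsubscriber.conf.old with durable_rename()
recovery_target_lsn = '0/3000060'
recovery_target_inclusive = true
recovery_target_action = promote
```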
Attachments
On Wed, Oct 8, 2025 at 7:43 AM Michael Paquier <michael@paquier.xyz> wrote:
> With all that in mind, I came
> up with the following solution, which is able to fix what you want to
> address (aka not load any of the recovery parameters written by the
> tool if you reactivate a standby with a new signal file), while also
> satisfying my condition, which is to keep a track of the parameters
> written.

I'd like to back up one more step: why do we think that this is even a
valid scenario in the first place? The original scenario involves
running pg_createsubscriber and then putting the server back into
recovery mode. But why is it valid to just put the server back into
recovery mode at that point? That doesn't seem like something that you
can just go do and expect it to work, especially if you don't check
that other parameters have the values that you want.

Generally, recovery is a one-time event, and once you exit, you only
reenter on a newly-taken backup or after a crash or a pg_rewind. There
are, of course, other times when you can force a server back into
recovery without anything bad happening, but it's not my impression
that we support that in general; it's something you can choose to do
as an expert operator if you are certain that it's OK in your
scenario.

So my question is: why should we do anything at all about this?

--
Robert Haas
EDB: http://www.enterprisedb.com