Thread: 039_end_of_wal: error in "xl_tot_len zero" test
Hello, hackers,

I believe there is a small problem in 039_end_of_wal.pl's "xl_tot_len zero" test. It supposes that after an immediate shutdown the server, upon startup recovery, should always produce a message matching "invalid record length at .*: wanted 24, got 0". However, if the circumstances are just right and we happen to hit exactly the edge of a WAL page, then the message on startup recovery would be "invalid magic number 0000 in log segment .*, offset .*". The test does not take that into account.

Now, reproducing this is somewhat tricky, because the exact position in WAL at test time depends on what exactly initdb did, and that not only differs between major versions, but also depends on e.g. username length, the locales available, and perhaps more. Even though this problem was originally found "in the wild" on one particular system on one particular code branch, I've written a small helper patch to make reproduction on master easier, see 0001-repro-for-039_end_of_wal-s-problem-with-page-end.patch.

This patch adds a single emit_message of (hopefully) the right size to make sure we hit the end of a WAL block right by the time we call $node->stop('immediate') in the "xl_tot_len zero" test. With this patch, the "xl_tot_len zero" test fails every time, because the server writes "invalid magic number 0000 in log segment" while the test still only expects "invalid record length at .*: wanted 24, got 0". Of course, this 0001 patch is *not* meant to be committed, but only to serve as an issue-reproduction helper.

I can think of two possible fixes:

1. Update advance_out_of_record_splitting_zone to also avoid stopping at exactly the block end:

      my $page_offset = $end_lsn % $WAL_BLOCK_SIZE;
 -    while ($page_offset >= $WAL_BLOCK_SIZE - $page_threshold)
 +    while ($page_offset >= $WAL_BLOCK_SIZE - $page_threshold ||
 +           $page_offset <= $SizeOfXLogShortPHD)
      {

see 0002-fix-xl_tot_len-zero-test-amend-advance_out_of.patch

We need to compare with $SizeOfXLogShortPHD (and not with zero) because at that point, even though we haven't actually written out the new WAL page yet, its header is already in place in memory and taken into account for LSN reporting.

2. Alternatively, amend the "xl_tot_len zero" test to expect the "invalid magic number 0000 in WAL segment" message as well:

      $node->start;
      ok( $node->log_contains(
 +            "invalid magic number 0000 in WAL segment|" .
              "invalid record length at .*: expected at least 24, got 0",
              $log_size
          ),

see 0003-alt.fix-for-xl_tot_len-zero-test-accept-invalid.patch

I think it makes sense to backport whatever the final change will be to all branches with 039_end_of_wal (REL_12+).

Any thoughts?

--
Anton Voloshin
Postgres Professional, The Russian Postgres Company
https://postgrespro.ru
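The difference between the two messages can be sketched in a short Python model. This is illustrative only, not PostgreSQL code: the page size and header/record-header sizes below are the stock defaults (XLOG_BLCKSZ = 8192, SizeOfXLogRecord = 24), and real recovery performs many more checks than this.

```python
# Simplified model of which complaint startup recovery makes when it
# reads zeros at the end of WAL after an immediate shutdown.
# Assumed constants (stock PostgreSQL defaults, not taken from the thread):
WAL_BLOCK_SIZE = 8192        # XLOG_BLCKSZ
SIZE_OF_XLOG_RECORD = 24     # record header size; the "wanted 24" in the message

def expected_recovery_message(end_lsn: int) -> str:
    """Return the (abbreviated) recovery message for an end-of-WAL at end_lsn."""
    page_offset = end_lsn % WAL_BLOCK_SIZE
    if page_offset == 0:
        # End of WAL fell exactly on a page boundary: the next page was never
        # written, so its header (including xlp_magic) is all zeros and the
        # page-header check fails before any record is even looked at.
        return "invalid magic number 0000 in log segment"
    # Mid-page: the page header is valid, so the reader gets as far as the
    # record header and finds xl_tot_len == 0.
    return f"invalid record length: wanted {SIZE_OF_XLOG_RECORD}, got 0"

print(expected_recovery_message(0x1000000))        # exactly on a page boundary
print(expected_recovery_message(0x1000000 + 100))  # mid-page
```

The test currently expects only the second message, which is why hitting a page boundary makes it fail.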
Attachments
On Fri, Jan 19, 2024 at 1:47 AM Anton Voloshin <a.voloshin@postgrespro.ru> wrote:
> I believe there is a small problem in 039_end_of_wal.pl's
> "xl_tot_len zero" test. It supposes that after an immediate shutdown the
> server, upon startup recovery, should always produce a message matching
> "invalid record length at .*: wanted 24, got 0". However, if the
> circumstances are just right and we happen to hit exactly the edge of a
> WAL page, then the message on startup recovery would be "invalid magic
> number 0000 in log segment .*, offset .*". The test does not take that
> into account.

Hi Anton,

Thanks for figuring this out! Right, I see. I will look more closely when I'm back from summer vacation in a few days, but first reaction:

> Now, reproducing this is somewhat tricky, because the exact position in
> WAL at test time depends on what exactly initdb did, and that not only
> differs between major versions, but also depends on e.g. username
> length, the locales available, and perhaps more. [...]
>
> This patch adds a single emit_message of (hopefully) the right size to
> make sure we hit the end of a WAL block right by the time we call
> $node->stop('immediate') in the "xl_tot_len zero" test. [...]
>
> I can think of two possible fixes:
>
> 1. Update advance_out_of_record_splitting_zone to also avoid stopping at
> exactly the block end:
>
>       my $page_offset = $end_lsn % $WAL_BLOCK_SIZE;
>  -    while ($page_offset >= $WAL_BLOCK_SIZE - $page_threshold)
>  +    while ($page_offset >= $WAL_BLOCK_SIZE - $page_threshold ||
>  +           $page_offset <= $SizeOfXLogShortPHD)
>       {
>
> see 0002-fix-xl_tot_len-zero-test-amend-advance_out_of.patch
>
> We need to compare with $SizeOfXLogShortPHD (and not with zero) because
> at that point, even though we haven't actually written out the new WAL
> page yet, its header is already in place in memory and taken into
> account for LSN reporting.

I like the fact that this preserves the same end-of-WAL case that we're trying to test. I don't yet have an opinion on the best way to do it though. Would it be enough to add emit_message($node, 0) after advance_out_of_record_splitting_zone()?

The thing about this one specific test that is different from the later ones is that it doesn't actually write a record header at all; it was relying purely on pre-existing trailing zeroes, but it assumed the page header would be valid. As you figured out, that isn't true if we were right on the page boundary. Perhaps advance_out_of_record_splitting_zone() followed by emit_message(0) would make that always true, even then?

> 2. Alternatively, amend the "xl_tot_len zero" test to expect the
> "invalid magic number 0000 in WAL segment" message as well:
>
>       $node->start;
>       ok( $node->log_contains(
>  +            "invalid magic number 0000 in WAL segment|" .
>               "invalid record length at .*: expected at least 24, got 0",
>               $log_size
>           ),
>
> see 0003-alt.fix-for-xl_tot_len-zero-test-accept-invalid.patch

Tolerating the two different messages would weaken the test.

> I think it makes sense to backport whatever the final change will be to
> all branches with 039_end_of_wal (REL_12+).

+1
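The amended loop condition from 0002 quoted above can be modelled outside Perl. A minimal sketch, where PAGE_THRESHOLD is a hypothetical stand-in for the test's $page_threshold (the real value lives in the test script), and the header/page sizes are the stock defaults:

```python
# Model of the 0002 condition: keep emitting filler records while the
# insert position is either near the end of a page (where the next record
# would be split across pages) or still within the page-header area at the
# start of a page (where an immediate shutdown can leave a zeroed header).
WAL_BLOCK_SIZE = 8192        # XLOG_BLCKSZ (stock default)
SIZE_OF_SHORT_PHD = 24       # SizeOfXLogShortPHD (stock default)
PAGE_THRESHOLD = 256         # hypothetical stand-in for $page_threshold

def must_keep_advancing(end_lsn: int) -> bool:
    page_offset = end_lsn % WAL_BLOCK_SIZE
    return (page_offset >= WAL_BLOCK_SIZE - PAGE_THRESHOLD
            or page_offset <= SIZE_OF_SHORT_PHD)
```

Note the `<= SIZE_OF_SHORT_PHD` arm: an LSN reported just past a page boundary already accounts for the in-memory page header, so comparing against zero alone would not catch the dangerous positions.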
On Fri, Jan 19, 2024 at 11:35:30AM +1300, Thomas Munro wrote:
> On Fri, Jan 19, 2024 at 1:47 AM Anton Voloshin
> <a.voloshin@postgrespro.ru> wrote:
>> I believe there is a small problem in the 039_end_of_wal.pl's
>> "xl_tot_len zero" test. It supposes that after immediate shutdown the
>> server, upon startup recovery, should always produce a message matching
>> "invalid record length at .*: wanted 24, got 0". However, if the
>> circumstances are just right and we happened to hit exactly on the edge
>> of WAL page, then the message on startup recovery would be "invalid
>> magic number 0000 in log segment .*, offset .*". The test does not take
>> that into account.
>
> Thanks for figuring this out! Right, I see. I will look more closely
> when I'm back from summer vacation in a few days, but first reaction:

Thomas, are you planning to look at that?
--
Michael
Attachments
Hello, Thomas,

On 19/01/2024 01:35, Thomas Munro wrote:
> I don't yet have an opinion on the best way to
> do it though. Would it be enough to add emit_message($node, 0) after
> advance_out_of_record_splitting_zone()?

Yes, indeed that seems to be enough. At least I could not produce any more "xl_tot_len zero" failures with that addition. I like this solution the best.

> Tolerating the two different messages would weaken the test.

I agree; I also don't really like this option.

--
Anton Voloshin
Postgres Professional, The Russian Postgres Company
https://postgrespro.ru
On Thu, Feb 15, 2024 at 10:40 PM Anton Voloshin <a.voloshin@postgrespro.ru> wrote:
> On 19/01/2024 01:35, Thomas Munro wrote:
> > I don't yet have an opinion on the best way to
> > do it though. Would it be enough to add emit_message($node, 0) after
> > advance_out_of_record_splitting_zone()?
>
> Yes, indeed that seems to be enough. At least I could not produce any
> more "xl_tot_len zero" failures with that addition.
>
> I like this solution the best.

Oh, it looks like this new build farm animal "skimmer" might be reminding us about this issue:

https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=skimmer&br=HEAD

I don't know why it changed, but since this is an LSN/page alignment thing, it could be due to external things like an OS upgrade adding more locales or something else that affects initdb. Will look soon and fix.
Thomas Munro <thomas.munro@gmail.com> writes:
> Oh, it looks like this new build farm animal "skimmer" might be
> reminding us about this issue:
> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=skimmer&br=HEAD
> I don't know why it changed,

At this point it seems indisputable that 7d2c7f08d9 is what broke skimmer, but that didn't go anywhere near WAL-related code, so how? My best guess is that it changed the amount of WAL generated by initdb just enough to make the problem reproduce on this animal. However, why's it *only* happening on this animal? The amount of WAL we generate isn't all that system-specific.

regards, tom lane
On Mon, 6 May 2024 at 15:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> My best guess is that that changed the amount of WAL generated by
> initdb just enough to make the problem reproduce on this animal.
> However, why's it *only* happening on this animal? The amount of
> WAL we generate isn't all that system-specific.

I'd say that's a good theory as it's now passing again [1] after the recent system_views.sql change done in 521a7156ab.

David

[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skimmer&dt=2024-05-06%2017%3A43%3A38
David Rowley <dgrowleyml@gmail.com> writes:
> On Mon, 6 May 2024 at 15:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> My best guess is that that changed the amount of WAL generated by
>> initdb just enough to make the problem reproduce on this animal.
>> However, why's it *only* happening on this animal? The amount of
>> WAL we generate isn't all that system-specific.
>
> I'd say that's a good theory as it's now passing again [1] after the
> recent system_views.sql change done in 521a7156ab.

Hm. It occurs to me that there *is* a system-specific component to the amount of WAL emitted during initdb: the number of locales that "locale -a" prints translates directly to the number of rows inserted into pg_collation. So maybe skimmer has a locale set that's a bit different from anybody else's, and that's what let it see this issue.

regards, tom lane
On 13/05/2024 00:39, Tom Lane wrote:
> Hm. It occurs to me that there *is* a system-specific component to
> the amount of WAL emitted during initdb: the number of locales
> that "locale -a" prints translates directly to the number of
> rows inserted into pg_collation. [...]

Yes. Another system-specific circumstance affecting WAL position is the name length of the unix user doing initdb. I've seen 039_end_of_wal failing consistently under user aaaaaaaa but working fine with aaaa, both on the same machine at the same time.

To be more precise, on one particular machine under those particular circumstances (including the set of locales) it would work for any username with length < 8 or >= 16, but would fail for lengths 8..15 (in bytes, not characters, if non-ASCII usernames were used).

--
Anton Voloshin
Postgres Professional, The Russian Postgres Company
https://postgrespro.ru
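The thread does not establish why exactly an 8..15-byte window appears, but one plausible mechanism is MAXALIGN rounding: data that embeds the username (plus a trailing NUL byte) gets padded to an 8-byte boundary, so all lengths in one alignment bucket consume the same padded size and shift every later LSN by the same step. A speculative sketch of that arithmetic:

```python
# Hypothetical illustration of why username lengths 8..15 could behave
# identically: with MAXALIGN = 8 (the usual value), "length + 1" (for a
# trailing NUL) rounds up to 16 bytes for every length in 8..15, while
# lengths below 8 round to 8 and lengths 16..23 round to 24.
def maxalign(n: int, align: int = 8) -> int:
    """Round n up to the next multiple of align."""
    return (n + align - 1) & ~(align - 1)

for length in (4, 7, 8, 12, 15, 16):
    print(f"username length {length:2d} -> padded size {maxalign(length + 1)}")
```

Whether this is the actual mechanism behind the observed window would need to be confirmed against the WAL that initdb writes; it is offered only as a consistent explanation of the 8..15 boundaries reported above.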