Discussion: Clean up some pg_dump tests
Following [0], I did a broader analysis of some dubious or nonsensical
like/unlike combinations in the pg_dump tests.
This includes:
1) Remove useless entries from "unlike" lists. Runs that are not
listed in "like" don't need to be excluded in "unlike".
2) Ensure there is always a "like" list, even if it is empty. This
makes the test more self-documenting.
3) Use predefined lists such as %full_runs where appropriate, instead
of listing all runs separately.
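For illustration, here is a minimal self-contained sketch of the kind of entry points 1 and 2 target, together with the sort of scan that finds the problem. The test name, SQL, and run names are hypothetical, not taken from the real suite:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical test entry in the style of t/002_pg_dump.pl (the name,
# SQL, and run names are illustrative, not from the real suite).
my %tests = (
	'CREATE TABLE regress_example' => {
		create_sql => 'CREATE TABLE regress_example (a int);',
		regexp     => qr/^CREATE TABLE regress_example/m,
		like       => { clean => 1 },
		# Useless (point 1): "binary_upgrade" is not in "like", so
		# excluding it in "unlike" has no effect.
		unlike => { binary_upgrade => 1 },
	},
);

# The kind of automatic check the patch adds: collect "unlike" entries
# that do not correspond to any "like" entry.
our @problems;
foreach my $test (sort keys %tests)
{
	push @problems, "missing like in test \"$test\""
	  if !defined($tests{$test}->{like});
	foreach my $key (sort keys %{ $tests{$test}->{unlike} })
	{
		push @problems, "useless unlike \"$key\" in test \"$test\""
		  if !defined($tests{$test}->{like}->{$key});
	}
}
print "$_\n" for @problems;
```

Running this prints one complaint about the redundant "binary_upgrade" exclusion.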
I also added code that checks 1 and 2 automatically and issues a message
for violations. (This is currently done with "diag". We could also
make it an error.)
The results are in the attached patch.
[0]:
https://www.postgresql.org/message-id/3ddf79f2-8b7b-a093-11d2-5c739bc64f86@eisentraut.org
Attachments
I tried this out. I agree it's a good change. BTW, this made me
realize that "unlike" is not a good name: maybe it should be called
"except".
On 2023-Oct-02, Peter Eisentraut wrote:
> +	if (!defined($tests{$test}->{like}))
> +	{
> +		diag "missing like in test \"$test\"";
> +	}
> +	if ($tests{$test}->{unlike}->{$test_key} &&
> +		!defined($tests{$test}->{like}->{$test_key}))
> +	{
> +		diag "useless unlike \"$test_key\" in test \"$test\"";
> +	}
I would add quotes to the words "like" and "unlike" there. Otherwise,
these sentences are hard to parse. Also, some commentary on what this
is about seems warranted: maybe "Check that this test properly defines
which dumps the output should match on." or similar.
I didn't like using diag(), because automated runs will not alert to any
problems. Maybe that's not critical, but I fear that people would not
notice problems that are just noise in the output. Let's make them test
errors; fail() seems good enough. With the lines I quote above and
omitting the test corrections, I get this:
# Failed test 'useless unlike "binary_upgrade" in test "Disabled trigger on partition is not created"'
# at t/002_pg_dump.pl line 4960.
# Failed test 'useless unlike "clean" in test "Disabled trigger on partition is not created"'
# at t/002_pg_dump.pl line 4960.
[... a few others ...]
Test Summary Report
-------------------
t/002_pg_dump.pl (Wstat: 15104 (exited 59) Tests: 11368 Failed: 59)
Failed tests: 241, 486, 731, 1224, 1473, 1719, 1968, 2217
2463, 2712, 2961, 3207, 3452, 3941, 4190
4442, 4692, 4735-4736, 4943, 5094, 5189
5242, 5341, 5436, 5681, 5926, 6171, 6660
6905, 7150, 7395, 7640, 7683, 7762, 7887
7930, 7941, 8134, 8187, 8229, 8287, 8626
8871, 8924, 9023, 9170, 9269, 9457, 9515
9704, 9762, 10345, 10886, 10985, 11105
11123, 11134, 11327
Non-zero exit status: 59
Files=5, Tests=11482, 15 wallclock secs ( 0.43 usr 0.04 sys + 4.56 cusr 1.63 csys = 6.66 CPU)
Result: FAIL
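A self-contained sketch of such a fail()-based check. The hash here is a hypothetical stand-in (the real one lives in t/002_pg_dump.pl), and the script fails by design, to show the "# Failed test" diagnostic a sloppy entry would produce:

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical stand-in data; the real hash lives in t/002_pg_dump.pl.
my %tests = (
	'well-formed test' => {
		like   => { clean => 1 },
		unlike => {},
	},
	'sloppy test' => {
		like   => { clean => 1 },
		unlike => { binary_upgrade => 1 },    # not in "like": useless
	},
);
my @test_keys = qw(binary_upgrade clean);

foreach my $test (sort keys %tests)
{
	fail("missing like in test \"$test\"")
	  if !defined($tests{$test}->{like});
	foreach my $test_key (@test_keys)
	{
		if ($tests{$test}->{unlike}->{$test_key}
			&& !defined($tests{$test}->{like}->{$test_key}))
		{
			fail("useless unlike \"$test_key\" in test \"$test\"");
		}
	}
}
done_testing();
```

Because the check is an ordinary test assertion, prove reports the violation as a failed test and exits nonzero, which is exactly what automated runs will notice.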
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
"Not even the greatest genius would get very far if he had
to draw everything from his own inner self" (Goethe)
On 09.10.23 11:20, Alvaro Herrera wrote:
> I tried this out. I agree it's a good change. BTW, this made me
> realize that "unlike" is not a good name: maybe it should be called
> "except".

right

> I would add quotes to the words "like" and "unlike" there. Otherwise,
> these sentences are hard to parse. Also, some commentary on what this
> is about seems warranted: maybe "Check that this test properly defines
> which dumps the output should match on." or similar.

Done.

I also moved the code a bit earlier, before the checks for supported
compression libraries etc., so it runs even if those cause a skip.

> I didn't like using diag(), because automated runs will not alert to
> any problems. Maybe that's not critical, but I fear that people would
> not notice problems that are just noise in the output. Let's make them
> test errors; fail() seems good enough. With the lines I quote above
> and omitting the test corrections, I get this:

After researching this a bit more, I think "die" is the convention for
problems in the test definitions themselves. (Otherwise, you're writing
a test about the tests, which would be a bit weird.) The result is
approximately the same.
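The "die" convention described here boils down to a sanity pass like the following sketch. The data is hypothetical and well-formed, so the pass completes; placing the loop before any library-availability skip logic means it fires even when the tests themselves would be skipped:

```perl
use strict;
use warnings;

# Hypothetical, well-formed entry; with the "die" convention, a bad
# definition aborts the whole script instead of failing one subtest.
my %tests = (
	'example test' => {
		like   => { clean => 1, binary_upgrade => 1 },
		unlike => { binary_upgrade => 1 },    # fine: also listed in "like"
	},
);

# Check that each test properly defines which dumps the output should
# match on.  Run this before any compression-library skip checks so it
# fires even when the tests themselves would be skipped.
foreach my $test (sort keys %tests)
{
	die "missing like in test \"$test\"\n"
	  if !defined($tests{$test}->{like});
	foreach my $key (sort keys %{ $tests{$test}->{unlike} })
	{
		die "useless unlike \"$key\" in test \"$test\"\n"
		  if !defined($tests{$test}->{like}->{$key});
	}
}
our $status = "test definitions OK";
print "$status\n";
```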
Attachments
On 10.10.23 10:03, Peter Eisentraut wrote:
> On 09.10.23 11:20, Alvaro Herrera wrote:
>> I tried this out. I agree it's a good change. BTW, this made me
>> realize that "unlike" is not a good name: maybe it should be called
>> "except".
>
> right
>
>> I would add quotes to the words "like" and "unlike" there. Otherwise,
>> these sentences are hard to parse. Also, some commentary on what this
>> is about seems warranted: maybe "Check that this test properly defines
>> which dumps the output should match on." or similar.
>
> Done.
>
> I also moved the code a bit earlier, before the checks for supported
> compression libraries etc., so it runs even if those cause a skip.
>
>> I didn't like using diag(), because automated runs will not alert to
>> any problems. Maybe that's not critical, but I fear that people would
>> not notice problems that are just noise in the output. Let's make
>> them test errors; fail() seems good enough. With the lines I quote
>> above and omitting the test corrections, I get this:
>
> After researching this a bit more, I think "die" is the convention for
> problems in the test definitions themselves. (Otherwise, you're
> writing a test about the tests, which would be a bit weird.) The
> result is approximately the same.

committed
On 02.10.23 09:12, Peter Eisentraut wrote:
> 1) Remove useless entries from "unlike" lists. Runs that are not
> listed in "like" don't need to be excluded in "unlike".
>
> 2) Ensure there is always a "like" list, even if it is empty. This
> makes the test more self-documenting.
>
> I also added code that checks 1 and 2 automatically and issues a
> message for violations.

I have recently discovered that the same code also exists separately in
the test_pg_dump module test. This should probably be kept consistent.
So here is a patch that adds the same checks there. In this case, we
didn't need to fix any of the existing subtests.

I plan to commit this soon if there are no concerns.