Discussion: multi-platform, multi-locale regression tests
I'm looking for some ideas on how to deal with the regression tests for the per-column collation feature. These are the issues:

* The feature only works on some platforms (tentatively: Linux, Windows). -> Possible solution: handle it like the xml test.
* The locale names are platform dependent, so there would need to be different test files per locale.
* The test files need to use some non-ASCII characters. So far, I have encoded the test file in UTF-8 and run the tests with make check MULTIBYTE=UTF8.
* Also, the allowed collations depend on the server encoding, so any solution for the previous point that results in the server encoding of the test database being variable will make the setup of the regression test SQL file more interesting.
* Of course the actual sort orders could also be different on different platforms, but that problem can likely be contained.

One possible way out is not to include these tests in the main test set and instead require manual invocation. Better ideas?
On Nov 9, 2010, at 12:18 PM, Peter Eisentraut wrote: > One possible way out is not to include these tests in the main test set > and instead require manual invocation. > > Better ideas? I've been talking to Nasby and Dunstan about adding a Test::More/pgTAP-based test suite to the core. It wouldn't run with the usual core suite used by developers, which would continue to use pg_regress. But they could run it if they wanted (and had the prerequisites), and the build farm animals would run them regularly. The nice thing about using a TAP-based framework is that you can skip tests that don't meet platform requirements, and compare values within the tests, right where you write them, rather than in a separate file. You can also dynamically change how you compare things depending on the environment, such as the locales that vary on different platforms. Thoughts? Best, David
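The skip behavior David describes is the key TAP property here. A minimal sketch of the idea in plain Python (the test names and the platform check are invented placeholders; pgTAP itself provides this as SQL functions, not Python):

```python
import sys

def run_tap_tests():
    """Emit TAP lines, skipping tests whose platform requirements aren't met."""
    # Each entry: (name, platform predicate, test body).  The names and the
    # platform check are made-up illustrations, not actual PostgreSQL tests.
    tests = [
        ("ascii sort order",
         lambda: True,
         lambda: sorted("ba") == ["a", "b"]),
        ("per-column collation",
         lambda: sys.platform.startswith(("linux", "win")),
         lambda: True),
    ]
    lines = ["1..%d" % len(tests)]
    for i, (name, supported, body) in enumerate(tests, start=1):
        if not supported():
            # A skipped test still reports "ok", so a platform that can't
            # run it passes the suite instead of failing it.
            lines.append("ok %d - %s # SKIP not supported on this platform"
                         % (i, name))
        elif body():
            lines.append("ok %d - %s" % (i, name))
        else:
            lines.append("not ok %d - %s" % (i, name))
    return lines
```

The per-test SKIP directive is exactly what pg_regress's per-file expected-output scheme cannot express: the decision lives next to the test, not in a separate platform-specific results file.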
2010/11/9 David E. Wheeler <david@kineticode.com>: > On Nov 9, 2010, at 12:18 PM, Peter Eisentraut wrote: > >> One possible way out is not to include these tests in the main test set >> and instead require manual invocation. >> >> Better ideas? > > I've been talking to Nasby and Dunstan about adding a Test::More/pgTAP-based test suite to the core. It wouldn't run with the usual core suite used by developers, which would continue to use pg_regress. But they could run it if they wanted (and had the prerequisites), and the build farm animals would run them regularly. > > The nice thing about using a TAP-based framework is that you can skip tests that don't meet platform requirements, and compare values within the tests, right where you write them, rather than in a separate file. You can also dynamically change how you compare things depending on the environment, such as the locales that vary on different platforms. > > Thoughts? Are you thinking of a contrib module 'pgtap' that we can use to accomplish the optional regression tests? -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support
On Nov 9, 2010, at 2:42 PM, Cédric Villemain wrote: > Are you thinking of a contrib module 'pgtap' that we can use to > accomplish the optional regression tests? Oh, if the project wants it in contrib, sure. Otherwise I'd probably just have the test stuff include it somehow. David
2010/11/9 David E. Wheeler <david@kineticode.com>: > On Nov 9, 2010, at 2:42 PM, Cédric Villemain wrote: > >> Are you thinking of a contrib module 'pgtap' that we can use to >> accomplish the optional regression tests? > > Oh, if the project wants it in contrib, sure. Otherwise I'd probably just have the test stuff include it somehow. Adding a unit test layer shipped with postgresql sounds good to me. And pgTAP can claim to be platform agnostic. -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support
On tis, 2010-11-09 at 14:00 -0800, David E. Wheeler wrote: > I've been talking to Nasby and Dunstan about adding a Test::More/pgTAP-based test suite to the core. It wouldn't run with the usual core suite used by developers, which would continue to use pg_regress. But they could run it if they wanted (and had the prerequisites), and the build farm animals would run them regularly. I'd welcome something like that, but I'm not sure that that's the best overall solution to this particular problem in the short run. But it would be great to have anyway.
> Peter Eisentraut wrote: > On tis, 2010-11-09 at 14:00 -0800, David E. Wheeler wrote: > >> I've been talking to Nasby and Dunstan about adding a >> Test::More/pgTAP-based test suite to the core. It wouldn't run >> with the usual core suite used by developers, which would continue >> to use pg_regress. But they could run it if they wanted (and had >> the prerequisites), and the build farm animals would run them >> regularly. > > I'd welcome something like that, but I'm not sure that that's the > best overall solution to this particular problem in the short run. > But it would be great to have anyway. For the Serializable Snapshot Isolation (SSI) patch I needed a test suite which would handle concurrent sessions which interleaved statements in predictable ways. I was told pgTAP wasn't a good choice for that and went with Markus Wanner's dtester package. The SSI patch adds a "dcheck" build target which is not included in any others to run the dtester tests. I don't know if dtester meets the other needs people have, or whether this is a complementary approach, but it seemed worth mentioning. -Kevin
On Nov 10, 2010, at 5:31 AM, Kevin Grittner wrote: > For the Serializable Snapshot Isolation (SSI) patch I needed a test > suite which would handle concurrent sessions which interleaved > statements in predictable ways. I was told pgTAP wasn't a good > choice for that and went with Markus Wanner's dtester package. The > SSI patch adds a "dcheck" build target which is not included in any > others to run the dtester tests. Right. pgTAP doesn't run tests; it's just a collection of assertion functions written in SQL and PL/pgSQL. It could have been used via a forking Perl script that would connect to the proper boxes, run the tests, collect the results, etc. But it clearly would have been a PITA, and the path of least resistance is often the best solution when hacking. Going with dcheck, which already did what you wanted, was clearly the right choice. Hopefully we can have the build farm animals run the dcheck target once SSI is committed. Best, David
On Wed, Nov 10, 2010 at 08:33:13AM -0800, David Wheeler wrote: > On Nov 10, 2010, at 5:31 AM, Kevin Grittner wrote: > > > For the Serializable Snapshot Isolation (SSI) patch I needed a > > test suite which would handle concurrent sessions which > > interleaved statements in predictable ways. I was told pgTAP > > wasn't a good choice for that and went with Markus Wanner's > > dtester package. The SSI patch adds a "dcheck" build target which > > is not included in any others to run the dtester tests. > > Right. pgTAP doesn't run tests, it's just a collection of assertion > functions written in SQL and PL/pgSQL. It could have been used via > a forking Perl script that would connect to the proper boxes, run > the tests, collect the results, etc. But it clearly would have been > a PITA, and the path of least resistance is often the best solution > when hacking. Going with dcheck, which already did what you wanted, > was clearly the right choice. > > Hopefully we can have the build farm animals run the dcheck target > once SSI is committed. Does Perl have some kind of concurrency-controlled test framework? Cheers, David. -- David Fetter <david@fetter.org> http://fetter.org/ Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter Skype: davidfetter XMPP: david.fetter@gmail.com iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics Remember to vote! Consider donating to Postgres: http://www.postgresql.org/about/donate
On 11/10/2010 08:31 AM, Kevin Grittner wrote: > I don't know if dtester meets the other needs people have, or whether > this is a complementary approach, but it seemed worth mentioning. Where is this available? Is it self-contained? And what does it require? cheers andrew
On Nov 10, 2010, at 9:48 AM, Andrew Dunstan wrote: >> I don't know if dtester meets the other needs people have, or whether >> this is a complementary approach, but it seemed worth mentioning. > > > Where is this available? Is it self-contained? And what does it require? Python. http://www.bluegap.ch/projects/dtester/ Best, David
"David E. Wheeler" <david@kineticode.com> wrote: > On Nov 10, 2010, at 9:48 AM, Andrew Dunstan wrote: >> Where is this available? Is it self-contained? And what does it >> require? > > Python. And some optional python packages, like twisted. > http://www.bluegap.ch/projects/dtester/ It looks like I may have raised the issue at a particularly inopportune time -- it looks like maybe Markus is reloading his git repo based on the new "official" git repo for PostgreSQL. -Kevin
On Wed, Nov 10, 2010 at 15:31, Kevin Grittner <Kevin.Grittner@wicourts.gov> wrote: > For the Serializable Snapshot Isolation (SSI) patch I needed a test > suite which would handle concurrent sessions which interleaved > statements in predictable ways. I was told pgTAP wasn't a good > choice for that and went with Markus Wanner's dtester package. Sounds like you could use pgTAP with dblink to do the same? :) Regards, Marti
Hi, On 11/10/2010 07:28 PM, Kevin Grittner wrote: > It looks like I may have raised the issue at a particularly > inopportune time -- it looks like maybe Markus is reloading his git > repo based on the new "official" git repo for PostgreSQL. Thanks for noticing. The dtester repository should be there again. Sorry for the inconvenience. Regards Markus
On ons, 2010-11-10 at 07:31 -0600, Kevin Grittner wrote: > I don't know if dtester meets the other needs people have, or whether > this is a complementary approach, but it seemed worth mentioning. The right tool for the right job, I'd say. One thing to aim for, perhaps, would be to make all tools in use produce a common output format, at least optionally, so that creating a common test run dashboard or something like that is more easily possible. TAP and xUnit come to mind.
Marti Raudsepp <marti@juffo.org> wrote: > Sounds like you could use pgTAP with dblink to do the same? :) I had never read through the docs for dblink until you posted this. In fact, it appears that some testing of proper SSI behavior can be added to standard regression tests with dblink (without needing pgTAP) if there is some way to allow a contrib module like that to be used. Would I have to add the SSI tests to the dblink regression tests, or is there some more graceful way that might be made to work? I don't think this would be a sane way to *replace* the dcheck tests, but it might be a way to work *some* testing of SSI into a more frequently run test set. -Kevin
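The core of what Kevin needs — replaying the statements of several sessions in one predictable order — can be sketched independently of any particular driver. A hypothetical scheduler (the session names and SQL strings are invented for illustration; in a real test each step would issue one statement over its own connection, e.g. via dblink):

```python
def run_interleaving(sessions, schedule):
    """Run steps from several sessions in one fixed, repeatable order.

    sessions: dict mapping session name -> ordered list of step callables.
    schedule: list of session names; each occurrence runs that session's
    next pending step.  Returns the session names in execution order.
    """
    pending = {name: iter(steps) for name, steps in sessions.items()}
    executed = []
    for name in schedule:
        step = next(pending[name])  # StopIteration = schedule over-runs a session
        step()
        executed.append(name)
    return executed

# Two mock "transactions" whose statements are interleaved s1, s2, s2, s1.
log = []
sessions = {
    "s1": [lambda: log.append("s1: BEGIN ISOLATION LEVEL SERIALIZABLE"),
           lambda: log.append("s1: COMMIT")],
    "s2": [lambda: log.append("s2: BEGIN ISOLATION LEVEL SERIALIZABLE"),
           lambda: log.append("s2: COMMIT")],
}
order = run_interleaving(sessions, ["s1", "s2", "s2", "s1"])
```

Because the schedule is an explicit data value, the same permutation can be replayed deterministically on every run — which is exactly what plain pg_regress scripts, being single-session, cannot do.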
On 11/10/2010 05:06 PM, Kevin Grittner wrote: > Marti Raudsepp<marti@juffo.org> wrote: > >> Sounds like you could use pgTAP with dblink to do the same? :) > > I had never read through the docs for dblink until you posted this. > In fact, it appears that some testing of proper SSI behavior can be > added to standard regression tests with dblink (without needing > pgTAP) if there is some way to allow a contrib module like that to > be used. Would I have to add the SSI tests to the dblink regression > tests, or is there some more graceful way that might be made to > work? > > I don't think this would be a sane way to *replace* the dcheck > tests, but it might be a way to work *some* testing of SSI into a > more frequently run test set. We already use some contrib stuff in the regression tests. (It really is time we stopped calling it contrib.) cheers andrew
On Nov 10, 2010, at 2:15 PM, Andrew Dunstan wrote: > We already use some contrib stuff in the regression tests. (It really is time we stopped calling it contrib.) Call them "core extensions". Works well considering Dimitri's work, which explicitly makes them extensions. So maybe change the directory name to "extensions" or "ext"? Best, David
"David E. Wheeler" <david@kineticode.com> writes: > On Nov 10, 2010, at 2:15 PM, Andrew Dunstan wrote: >> We already use some contrib stuff in the regression tests. (It really is time we stopped calling it contrib.) > Call them "core extensions". Works well considering Dimitri's work, which explicitly makes them extensions. So maybe changethe directory name to "extensions" or "ext"? We've been calling it "contrib" for a dozen years, so that name is pretty well baked in by now. IMO renaming it is pointless and will accomplish little beyond creating confusion and making back-patches harder. (And no, don't you dare breathe a word about git making that all automagically better. I have enough back-patching experience with git by now to be unimpressed; in fact, I notice that its rename-tracking feature falls over entirely when trying to back-patch further than 8.3. Apparently there's some hardwired limit on the number of files it can cope with.) regards, tom lane
On Nov 10, 2010, at 3:17 PM, Tom Lane wrote: > We've been calling it "contrib" for a dozen years, so that name is > pretty well baked in by now. IMO renaming it is pointless and will > accomplish little beyond creating confusion and making back-patches > harder. *Shrug*. Just change the name in the docs, then. It's currently "Additional Supplied Modules". Maybe just change that to "Additional Supplied Extensions" or, even better, "Core Extensions"? Best, David > (And no, don't you dare breathe a word about git making that > all automagically better. I have enough back-patching experience with > git by now to be unimpressed; in fact, I notice that its rename-tracking > feature falls over entirely when trying to back-patch further than 8.3. > Apparently there's some hardwired limit on the number of files it can > cope with.) How often do you have to back-patch contrib, anyway? David
On 11/10/2010 06:17 PM, Tom Lane wrote: > "David E. Wheeler" <david@kineticode.com> writes: >> On Nov 10, 2010, at 2:15 PM, Andrew Dunstan wrote: >>> We already use some contrib stuff in the regression tests. (It really is time we stopped calling it contrib.) >> Call them "core extensions". Works well considering Dimitri's work, which explicitly makes them extensions. So maybe change the directory name to "extensions" or "ext"? > We've been calling it "contrib" for a dozen years, so that name is > pretty well baked in by now. IMO renaming it is pointless and will > accomplish little beyond creating confusion and making back-patches > harder. The current name causes constant confusion. It's a significant misnomer, and leads people to distrust the code. There might be reasons not to change, but you should at least recognize why the suggestion is being made. > (And no, don't you dare breathe a word about git making that > all automagically better. I have enough back-patching experience with > git by now to be unimpressed; in fact, I notice that its rename-tracking > feature falls over entirely when trying to back-patch further than 8.3. > Apparently there's some hardwired limit on the number of files it can > cope with.) That's very sad. Did you file a bug? cheers andrew
On Wed, Nov 10, 2010 at 6:39 PM, David E. Wheeler <david@kineticode.com> wrote: > On Nov 10, 2010, at 3:17 PM, Tom Lane wrote: >> We've been calling it "contrib" for a dozen years, so that name is >> pretty well baked in by now. IMO renaming it is pointless and will >> accomplish little beyond creating confusion and making back-patches >> harder. > > *Shrug*. Just change the name in the docs, then. It's currently "Additional Supplied Modules". Maybe just change that to "Additional Supplied Extensions" or, even better, "Core Extensions"? I don't see any value to that change at all. Additional Supplied Modules is a fine name. If there's a problem here, it's with the name "contrib", but I don't see that there's enough value in changing that to be worth the hassle. I think the big hurdle with contrib isn't that it's called "contrib" but that it's not part of the core server and, in many cases, enabling a contrib module means editing postgresql.conf and bouncing the server. Of course, there are certainly SOME people who wouldn't mind editing postgresql.conf and bouncing the server but are scared off by the name contrib, but I suspect the hassle-factor is the larger issue by a substantial margin. >> (And no, don't you dare breathe a word about git making that >> all automagically better. I have enough back-patching experience with >> git by now to be unimpressed; in fact, I notice that its rename-tracking >> feature falls over entirely when trying to back-patch further than 8.3. >> Apparently there's some hardwired limit on the number of files it can >> cope with.) > > How often do you have to back-patch contrib, anyway?

    [rhaas pgsql]$ git log --format=oneline `git merge-base REL9_0_STABLE master`..REL9_0_STABLE | wc -l
    247
    [rhaas pgsql]$ git log --format=oneline `git merge-base REL9_0_STABLE master`..REL9_0_STABLE contrib | wc -l
    20

-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On Wed, Nov 10, 2010 at 7:01 PM, Andrew Dunstan <andrew@dunslane.net> wrote: > The current name causes constant confusion. It's a significant misnomer, and > leads people to distrust the code. There might be reasons not to change, but > you should at least recognize why the suggestion is being made. Is it your position that contrib code is as well-vetted as core code? >> (And no, don't you dare breathe a word about git making that >> all automagically better. I have enough back-patching experience with >> git by now to be unimpressed; in fact, I notice that its rename-tracking >> feature falls over entirely when trying to back-patch further than 8.3. >> Apparently there's some hardwired limit on the number of files it can >> cope with.) > > That's very sad. Did you file a bug? It's intentional behavior. It gives up when there are too many differences to avoid being slow. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
On 11/10/2010 07:51 PM, Robert Haas wrote: > On Wed, Nov 10, 2010 at 7:01 PM, Andrew Dunstan<andrew@dunslane.net> wrote: >> The current name causes constant confusion. It's a significant misnomer, and >> leads people to distrust the code. There might be reasons not to change, but >> you should at least recognize why the suggestion is being made. > Is it your position that contrib code is as well-vetted as core code? > > A damn sight more than it used to be. I claim a bit of credit for that - before the buildfarm existed it was quite poorly tested, but we can't get away with that any more. (Ditto PLs and ECPG once we added those into the buildfarm mix.) Of course, there are odd corners in the code. But hstore, for example, has just had a major makeover, and pgcrypto is pretty well maintained. Some other modules are less well loved. There are a few small bits of the core code that have cobwebs too. cheers andrew
On Wed, Nov 10, 2010 at 8:05 PM, Andrew Dunstan <andrew@dunslane.net> wrote: > On 11/10/2010 07:51 PM, Robert Haas wrote: >> On Wed, Nov 10, 2010 at 7:01 PM, Andrew Dunstan<andrew@dunslane.net> >> wrote: >>> >>> The current name causes constant confusion. It's a significant misnomer, >>> and >>> leads people to distrust the code. There might be reasons not to change, >>> but >>> you should at least recognize why the suggestion is being made. >> >> Is it your position that contrib code is as well-vetted as core code? > > A damn sight more than it used to be. I claim a bit of credit for that - > before the buildfarm existed it was quite poorly tested, but we can't get > away with that any more. (Ditto PLs and ECPG once we added those into the > buildfarm mix.) Of course, there are odd corners in the code. But hstore, > for example, has just had a major makeover, and pgcrypto is pretty well > maintained. Some other modules are less well loved. There are a few small > bits of the core code that have cobwebs too. Fair enough. I think overall our code quality is good, and, over time, it's probably risen both within and outside core. Still, I think renaming contrib would likely be a lot more hassle than it's worth, and I don't think it would do much to remove the central issue, which is that installing extensions is a pain in the neck. Dimitri's work will help with that somewhat, but there's still that nasty business of needing to update shared_preload_libraries and bounce the server, at least for some modules. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes: > I think the big hurdle with contrib isn't > that it's called "contrib" but that it's not part of the core server > and, in many cases, enabling a contrib module means editing > postgresql.conf and bouncing the server. Of course, there are > certainly SOME people who wouldn't mind editing postgresql.conf and > bouncing the server but are scared off by the name contrib, but I > suspect the hassle-factor is the larger issue by a substantial margin. You're forgetting about the dump and restore problems you now have as soon as you're using any contrib. They are more visible at upgrade time, of course, but still bad enough otherwise. Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Robert Haas <robertmhaas@gmail.com> writes: > work will help with that somewhat, but there's still that nasty > business of needing to update shared_preload_libraries and bounce the > server, at least for some modules. We have 45 contribs (ls -l contrib | grep -c ^d), out of which:

* auto_explain is shared_preload_libraries but I think could be local_preload_libraries
* pg_stat_statements is shared_preload_libraries (needs SHM)

and that's it. So my reading is that currently the only contrib module that needs more than a server reload is pg_stat_statements, because it needs some shared memory. Am I missing anything? Ok, now I'll add the custom_variable_classes setting to the control files in the extension's patch for the contribs that expose some of them. Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On 11/10/2010 07:51 PM, Robert Haas wrote: >>> (And no, don't you dare breathe a word about git making that >>> all automagically better. I have enough back-patching experience with >>> git by now to be unimpressed; in fact, I notice that its rename-tracking >>> feature falls over entirely when trying to back-patch further than 8.3. >>> Apparently there's some hardwired limit on the number of files it can >>> cope with.) >> That's very sad. Did you file a bug? > It's intentional behavior. It gives up when there are too many > differences to avoid being slow. We should adopt that philosophy. I suggest we limit all tables in future to 1m rows in the interests of speed. cheers andrew
On Thu, Nov 11, 2010 at 8:28 AM, Andrew Dunstan <andrew@dunslane.net> wrote: >> It's intentional behavior. It gives up when there are too many >> differences to avoid being slow. And, it's configurable, at least to diff and merge. If it's not available in all the other porcelains, yes, those would be bugs that should be fixed:

    -l<num>
        The -M and -C options require O(n^2) processing time where n is the
        number of potential rename/copy targets. This option prevents
        rename/copy detection from running if the number of rename/copy
        targets exceeds the specified number.

And it can even be specified via the config options diff.renameLimit and merge.renameLimit. > We should adopt that philosophy. I suggest we limit all tables in future to > 1m rows in the interests of speed. As long as it's configurable, and if it would make operations on smaller tables faster, then go for it. And we should by default limit shared_buffers to 32MB. Oh wait. There are always tradeoffs when picking defaults, a-la-postgresql.conf. We as a community are generally pretty quick to pick up the "defaults are very conservative, make sure you tune ..." song when people complain about "pg being too slow" ;-) a. -- Aidan Van Dyk Create like a god, aidan@highrise.ca command like a king, http://www.highrise.ca/ work like a slave.
Aidan Van Dyk <aidan@highrise.ca> writes: >>> It's intentional behavior. It gives up when there are too many >>> differences to avoid being slow. > And, it's configurable, at least to diff and merge. If it's not > available in all the other porcelains, yes, that would be bugs that > should be fixed: FWIW, I was seeing this with git cherry-pick, whose man page gives no hint of supporting any such option. > -l<num> > The -M and -C options require O(n^2) processing time where > n is the number of potential > rename/copy targets. This option prevents rename/copy > detection from running if the number > of rename/copy targets exceeds the specified number. Given that we have, in fact, never renamed any files in the history of the project, I'm wondering exactly why it thinks that the number of potential rename/copy targets isn't zero. The whole thing smells broken to me, which is why I am unhappy about the idea of suddenly starting to depend on it in a big way. regards, tom lane
On Thu, Nov 11, 2010 at 17:24, Tom Lane <tgl@sss.pgh.pa.us> wrote: > Given that we have, in fact, never renamed any files in the history of > the project, I'm wondering exactly why it thinks that the number of > potential rename/copy targets isn't zero. The whole thing smells > broken to me, which is why I am unhappy about the idea of suddenly > starting to depend on it in a big way. Because git doesn't do "rename tracking" at all -- a rename operation is no different from a delete+add operation. Instead it tracks how lines of code move around in the tree: https://git.wiki.kernel.org/index.php/GitFaq#Why_does_git_not_.22track.22_renames.3F Regards, Marti
Aidan Van Dyk <aidan@highrise.ca> writes: > Can you share what commit you were trying to cherry-pick, and what > your resulting commit was? I can try and take a quick look at them > and see if there is something obviously fishy with how git's trying to > merge the new commit on the old tree... See yesterday's line_construct_pm() patches. I committed in HEAD and then did "git cherry-pick master" in each back branch. These all worked, which would be the minimum expectation for a single-file patch against a function that hasn't changed since 1999. But in the older branches it bleated about shutting off rename detection because of too many files (sorry, don't have the exact message in front of me, but that was the gist of it). Not the sort of thing that gives one a warm feeling about the tool. I've seen this before when trying to use git cherry-pick, but I forget on which other patches exactly. Oh, for the record: $ git --version git version 1.7.3 regards, tom lane
Marti Raudsepp <marti@juffo.org> writes: > On Thu, Nov 11, 2010 at 17:24, Tom Lane <tgl@sss.pgh.pa.us> wrote: >> Given that we have, in fact, never renamed any files in the history of >> the project, I'm wondering exactly why it thinks that the number of >> potential rename/copy targets isn't zero. > Because git doesn't do "rename tracking" at all -- a rename operation > is no different from a delete+add operation. Instead it tracks how > lines of code move around in the tree: > https://git.wiki.kernel.org/index.php/GitFaq#Why_does_git_not_.22track.22_renames.3F Hmmm ... so rename tracking is O(N^2) in the total number of patches applied, or lines patched, or some such measure, between the branches you're trying to patch between? Ugh. Doesn't sound like something we want to grow dependent on. regards, tom lane
On 11/11/2010 10:17 AM, Aidan Van Dyk wrote: > >> We should adopt that philosophy. I suggest we limit all tables in future to >> 1m rows in the interests of speed. > As long as it's configurable, and if it would make operations on > smaller tables faster, than go for it. > > And we should by defualt limit shared_buffers to 32MB. Oh wait. > > There are always tradeoffs when picking defaults, a-la-postgresql.conf. > > We as a community are generally pretty quick to pick up the "defaults > are very conservative, make sure you tune ..." song when people > complain about "pg being too slow" > > ;-) > Well, I was of course being facetious. But since you mention it, Postgres is conservative about its defaults because it's a server. I don't think quite the same considerations apply to developer software that will be running on a workstation. And Tom's complaint was about what he saw as incorrect behavior. Our defaults might hurt performance, but I don't think they trade speed for incorrect behavior. Anyway, revenons à nos moutons. cheers andrew
On Thu, Nov 11, 2010 at 6:08 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote: > Marti Raudsepp <marti@juffo.org> writes: >> On Thu, Nov 11, 2010 at 17:24, Tom Lane <tgl@sss.pgh.pa.us> wrote: >>> Given that we have, in fact, never renamed any files in the history of >>> the project, I'm wondering exactly why it thinks that the number of >>> potential rename/copy targets isn't zero. > >> Because git doesn't do "rename tracking" at all -- a rename operation >> is no different from a delete+add operation. Instead it tracks how >> lines of code move around in the tree: >> https://git.wiki.kernel.org/index.php/GitFaq#Why_does_git_not_.22track.22_renames.3F > > Hmmm ... so rename tracking is O(N^2) in the total number of patches > applied, or lines patched, or some such measure, between the branches > you're trying to patch between? Ugh. Doesn't sound like something > we want to grow dependent on. No, it's dependent on the files changed between two trees. It does not analyze history when doing rename tracking. The default limit is 200. It should be easy to calculate what's needed for Postgres. -- marko
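Marko's point can be made concrete: every deleted path is a potential rename source and every added path a potential target, so the pairwise similarity scoring is quadratic in the files changed between the two trees, not in the history between them. A rough sketch of the cut-off check (simplified; the 200 default is the figure Marko quotes, and real git's heuristic is more involved than this):

```python
def rename_detection_cost(deleted_paths, added_paths, rename_limit=200):
    """Estimate git's rename-detection work between two trees.

    Returns (pairwise comparisons needed, whether detection would run).
    Per the man-page excerpt quoted earlier in the thread, detection is
    skipped when the number of rename/copy targets exceeds the limit.
    """
    comparisons = len(deleted_paths) * len(added_paths)
    would_run = len(added_paths) <= rename_limit
    return comparisons, would_run
```

So a back-patch touching one file can still trip the limit if the *trees being merged* differ by hundreds of adds and deletes — which matches Tom's observation that old branches bleat about too many files even for a single-file cherry-pick.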
On 11/10/2010 10:58 PM, Peter Eisentraut wrote: > One thing to aim for, perhaps, would be to make all tools in use produce > a common output format, at least optionally, so that creating a common > test run dashboard or something like that is more easily possible. TAP > and xUnit come to mind. Note that dtester features a TAP reporter. However, the way Kevin uses dtester, that probably won't give useful results. (As he uses custom print statements to do more detailed reporting than TAP could ever give you). Regards Markus Wanner
Markus Wanner wrote: > Note that dtester features a TAP reporter. However, the way Kevin > uses dtester, that probably won't give useful results. (As he uses > custom print statements to do more detailed reporting than TAP > could ever give you). According to the TAP draft standard, any line not beginning with 'ok', 'not ok', or '#' is a comment and must be ignored by a TAP consumer. They are considered comments, and the assumption is that there can be many of them. http://testanything.org/wiki/index.php/TAP_at_IETF:_Draft_Standard Since my more detailed output would all be considered ignorable comments, I think it's OK. It's there for human readers who want more detail, but otherwise must have no impact on a compliant consumer. -Kevin
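The rule Kevin quotes is simple enough to encode directly. A minimal classifier for one TAP line, following the draft-standard wording he cites (plan lines such as `1..N` are omitted here for brevity and would fall through as comments):

```python
def classify_tap_line(line):
    """Classify one TAP output line per the quoted draft-standard rule."""
    if line.startswith("not ok"):   # must be checked before the "ok" prefix
        return "failure"
    if line.startswith("ok"):
        return "success"
    if line.startswith("#"):
        return "diagnostic"
    # Anything else -- including dtester's detailed print output --
    # must be ignored by a compliant TAP consumer.
    return "comment"
```

Kevin's extra detail lines all land in the last branch, which is why his output happens to be TAP-safe: a compliant consumer simply never sees them as test results.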
On 11/12/2010 02:27 PM, Kevin Grittner wrote: > According to the TAP draft standard, any line not beginning with > 'ok', 'not ok', or '#' is a comment and must be ignored by a TAP > consumer. They are considered comments, and the assumption is that > there can be many of them. I stand corrected. Do you actually use the TapReporter? Maybe I confused it with the CursesReporter, which gets rather confused by custom output. Regards Markus Wanner
Markus Wanner wrote: > I stand corrected. Do you actually use the TapReporter? No. I know so little about TAP that I wasn't aware that dtester output was in the TAP format until I saw your post on this thread, so I went searching for the format to see what I might do to become more compliant -- and found that through sheer luck I happened to be compliant with the proposed spec. :-) > Maybe I confused with the CursesReporter, which gets rather > confused by custom output. I can check what that requires. Perhaps I can cause the detail output to not confuse that. [off to check...] -Kevin
On 11/12/2010 02:43 PM, Kevin Grittner wrote:
> Markus Wanner wrote:
>
>> I stand corrected. Do you actually use the TapReporter?
>
> No. I know so little about TAP that I wasn't aware that dtester
> output was in the TAP format

Well, there are three kinds of reporters: StreamReporter, TapReporter and CursesReporter. By default, either the curses or the stream reporter is chosen, depending on whether dtester thinks its stdout is a terminal. To make dtester report in TAP format, you'd need to specify the TapReporter upon creation of the Runner:

runner = dtester.runner.Runner(
    reporter=dtester.reporter.TapReporter(
        sys.stdout, sys.stderr, showTimingInfo=False))

> I can check what that requires. Perhaps I can cause the detail
> output to not confuse that. [off to check...]

The CursesReporter moves up and down the lines to write results for concurrently running tests. It's only useful on a terminal and certainly gets confused by anything that moves the cursor (which a plain 'print' certainly does).

The best solution would probably be to allow the reporters to write out comment lines. (However, due to the ability to run tests concurrently, these comment lines could only be appended at the end, without a clear visual connection to a specific test. As long as you are only running one test at a time, that certainly doesn't matter.)

Regards

Markus Wanner
Markus Wanner wrote:
> Well, there are three kinds of reporters: StreamReporter,
> TapReporter and CursesReporter. By default, either curses or stream
> is chosen, depending on whether or not dtester thinks its stdout is
> a terminal or not.
>
> The CursesReporter moves up and down the lines to write results to
> concurrently running tests. It's only useful on a terminal and
> certainly gets confused by anything that moves the cursor (which a
> plain 'print' certainly does).

Ah, well that explains some problems I've had with getting my output to behave quite like I wanted! Thanks for that summary! I'm pretty sure I've been getting the CursesReporter; I'll switch to TapReporter.

> The best solution would probably be to allow the reporters to write
> out comment lines. (However, due to the ability of running tests
> concurrently, these comment lines could only be appended at the
> end, without clear visual connection to a specific test. As long as
> you are only running one test at a time, that certainly doesn't
> matter).

Not sure what the best answer is for Curses -- would it make any sense to output a disk file with one of the other formats in addition to the screen, and direct detail to the file? Perhaps a separate file for each test, to make it easy to keep comments associated with the test? (Just brainstorming here.)

-Kevin
On Nov 12, 2010, at 6:28 AM, Kevin Grittner wrote:
>> The CursesReporter moves up and down the lines to write results to
>> concurrently running tests. It's only useful on a terminal and
>> certainly gets confused by anything that moves the cursor (which a
>> plain 'print' certainly does).
>
> Ah, well that explains some problems I've had with getting my output
> to behave quite like I wanted! Thanks for that summary! I'm pretty
> sure I've been getting the CursesReporter; I'll switch to
> TapReporter.

Oh, that would be great, because I can then have the TAP stuff I plan to add just run your tests and harness the results along with everything else.

Best,

David
"David E. Wheeler" wrote:
> On Nov 12, 2010, at 6:28 AM, Kevin Grittner wrote:
>> I'll switch to TapReporter.
>
> Oh, that would be great, because I can then have the TAP stuff I
> plan to add just run your tests and harness the results along with
> everything else.

I switched it with this patch:

http://git.postgresql.org/gitweb?p=users/kgrittn/postgres.git;a=commitdiff;h=da7932fd5d71a64e1a2ebba598dfe6874c978d2d

I have a couple of questions:

(1) Any idea why it finds the success of the tests unexpected?:

# ri-trigger: test started
['wxry1', 'c1', 'r2', 'wyrx2', 'c2'] committed
['wxry1', 'r2', 'c1', 'wyrx2', 'c2'] rolled back
['wxry1', 'r2', 'wyrx2', 'c1', 'c2'] rolled back
['wxry1', 'r2', 'wyrx2', 'c2', 'c1'] rolled back
['r2', 'wxry1', 'c1', 'wyrx2', 'c2'] rolled back
['r2', 'wxry1', 'wyrx2', 'c1', 'c2'] rolled back
['r2', 'wxry1', 'wyrx2', 'c2', 'c1'] rolled back
['r2', 'wyrx2', 'wxry1', 'c1', 'c2'] rolled back
['r2', 'wyrx2', 'wxry1', 'c2', 'c1'] rolled back
['r2', 'wyrx2', 'c2', 'wxry1', 'c1'] committed
rollback required: 8 / 8
commit required: 2 / 2
commit preferred: 0 / 0
ok 3 - ri-trigger (UNEXPECTED)

(2) If I wanted something to show in the TAP output, like the three counts at the end of the test, what's the right way to do that? (I suspect that printing with a '#' character at the front of the line would do it, but that's probably not the proper way...)

-Kevin
On Nov 12, 2010, at 12:39 PM, Kevin Grittner wrote:
> (2) If I wanted something to show in the TAP output, like the three
> counts at the end of the test, what's the right way to do that? (I
> suspect that printing with a '#' character at the front of the line
> would do it, but that's probably not the proper way...)

That is the proper way, but dtester might have a method for you to do that. If not, just do this before you print (the /m modifier makes ^ match at the start of every line, not just the start of the string):

$printme =~ s/^/# /mg;

Best,

David
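[Editor's note: since dtester itself is written in Python, here is a hypothetical Python equivalent of David's Perl one-liner; the helper name `tap_comment` is invented for illustration. It prefixes every line of a multi-line detail string with "# ", turning it into TAP comment lines.]

```python
# Python equivalent of the Perl s/^/# /mg idiom: prefix each line of a
# multi-line string with "# " so a TAP consumer treats it as comments.
def tap_comment(text):
    return "\n".join("# " + line for line in text.splitlines())

if __name__ == "__main__":
    detail = "rollback required: 8 / 8\ncommit required: 2 / 2"
    print(tap_comment(detail))
```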
I wrote:
> (1) Any idea why it finds the success of the tests unexpected?

Should anyone else run into this, it's controlled by this in the test scheduling definitions (the tdef values):

'xfail': True

There are other test flags you can override here, like 'skip' to skip a test.

-Kevin
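[Editor's note: a minimal sketch of the flag semantics Kevin describes. The exact shape of dtester's tdef structures may differ; the dict layout and the `expected_outcome` helper here are assumptions made for illustration only. The point is that per-test flags like 'xfail' and 'skip' ride along with each test definition and change how the harness interprets a result, which is why a passing 'xfail' test is reported as UNEXPECTED.]

```python
# Hypothetical test scheduling definition: per-test flags override the
# harness's interpretation of results (structure assumed, not dtester's
# actual API).
tdef = {
    'ri-trigger': {
        'xfail': True,   # success gets reported as UNEXPECTED
        'skip': False,   # set True to skip the test entirely
    },
}

def expected_outcome(flags):
    # Illustrative helper: what the harness expects given the flags.
    if flags.get('skip'):
        return 'skipped'
    return 'failure expected' if flags.get('xfail') else 'success expected'

if __name__ == "__main__":
    print(expected_outcome(tdef['ri-trigger']))
```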
Kevin,

On 11/13/2010 01:28 AM, Kevin Grittner wrote:
> Should anyone else run into this, it's controlled by this in the test
> scheduling definitions (the tdef values):
>
> 'xfail': True
>
> There are other test flags you can override here, like 'skip' to skip
> a test.

Correct. Looks like dtester urgently needs documentation...

Regards

Markus Wanner