12

We have a massive project with almost no unit tests at all. I would like to ensure from now on that developers do not commit new features (or bugs!) without at least minimal coverage from corresponding unit tests.

What are some ways to enforce this?

We use many tools, so perhaps I can use a plugin (JIRA, GreenHopper, FishEye, Sonar, Hudson). I was also thinking of a Subversion pre-commit hook, the Commit Acceptance Plugin for JIRA, or something equivalent.
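To make the pre-commit hook idea concrete, here is a rough sketch of what I have in mind (the repository layout and paths are only examples, and this only checks that *some* test file changed alongside production code, not actual coverage):

```python
#!/usr/bin/env python
# Subversion pre-commit hook sketch (hypothetical src/main vs. src/test layout).
# Rejects commits that change production .java files without touching any test file.
# Subversion invokes the hook as: pre-commit REPOS TXN
import subprocess
import sys

repos, txn = sys.argv[1], sys.argv[2]

# Ask svnlook which paths this transaction changes.
output = subprocess.check_output(
    ['svnlook', 'changed', '--transaction', txn, repos])

changed = []
for line in output.decode().splitlines():
    parts = line.split(None, 1)          # e.g. "U   trunk/src/main/java/Foo.java"
    if len(parts) == 2:
        changed.append(parts[1].strip())

touches_main = any('/src/main/' in p and p.endswith('.java') for p in changed)
touches_test = any('/src/test/' in p for p in changed)

if touches_main and not touches_test:
    sys.stderr.write('Commit rejected: production code changed '
                     'without any test changes.\n')
    sys.exit(1)

sys.exit(0)
```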

Thoughts?

Dave Jarvis
  • Do you mean "I would like to ensure ... that the developers *don't* commit new features without minimal coverage"? It sounds like you want a rule of 0-coverage as it is. – Matthew Gilliard Mar 02 '11 at 23:15

3 Answers

6

Sonar (a wonderful tool, by the way) with the Build Breaker plugin can break your Hudson build when some metrics don't meet specified rules. You can set up a rule in Sonar that triggers an alert (eventually causing the build to fail) when coverage falls below a given threshold. The only drawback is that you probably want the coverage to grow, so you must remember to raise the alert level every day to the current value.
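If remembering to raise the alert level by hand becomes tedious, that ratcheting step can also be scripted outside Sonar. A minimal sketch, assuming a Cobertura-style `coverage.xml` (whose root `<coverage>` element carries a `line-rate` attribute) and a baseline file kept on the build machine (both paths below are made up), that a Hudson job could run after the tests:

```python
#!/usr/bin/env python
# Coverage ratchet sketch: fail the build if overall line coverage drops below
# the last recorded value, otherwise move the baseline up to the new value.
import os
import sys
import xml.etree.ElementTree as ET

REPORT = 'target/site/cobertura/coverage.xml'    # assumed report location
BASELINE = '/var/hudson/coverage-baseline.txt'   # assumed baseline location

# Cobertura reports overall line coverage as a 0..1 line-rate attribute.
current = float(ET.parse(REPORT).getroot().get('line-rate'))

previous = 0.0
if os.path.exists(BASELINE):
    with open(BASELINE) as f:
        previous = float(f.read().strip())

if current < previous:
    sys.stderr.write('Coverage dropped: %.2f%% < baseline %.2f%%\n'
                     % (current * 100, previous * 100))
    sys.exit(1)   # non-zero exit fails the Hudson build step

# Coverage held or improved: ratchet the baseline upward.
with open(BASELINE, 'w') as f:
    f.write('%f\n' % current)
```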

Simon Hellinger
Tomasz Nurkiewicz
  • Excellent suggestion, thanks. I can think of a drawback, though, related to the fact that the application has almost no coverage today (and reaching a decent minimum will take some serious time; this thing is big). I want to focus only on new features/bug fixes/commits, although of course maybe I misunderstood your standpoint – Nicolas Rodríguez Seara Mar 02 '11 at 21:15
  • 1
    Well, then set the minimal coverage to 0% or 0.5% or whatever you have now. If after day one your coverage is 0.7%, increase the alert level to 0.7%. If some developer commits code without tests, global coverage will very likely drop to, say, 0.65%, which is lower than today's level and will trigger a build failure. At least your coverage won't decrease. – Tomasz Nurkiewicz Mar 02 '11 at 21:25
  • If it only checked the coverage of the piece of code (file) being checked in, then you could set the coverage level to like 10-30% to start. This would force a small amount of backfill with every checkin without forcing them to hit the entire codebase to get a checkin to work. – Bill K Mar 02 '11 at 21:28
  • I would add one caveat. Imagine the scenario where someone replaces 2kLOC of the better-covered code with one call to a 3rd party library. Surely a good move, right? But your project's overall test coverage has decreased. – Matthew Gilliard Mar 02 '11 at 23:12
  • 2
    A better metric than %age coverage is simply `number of lines not covered` - the lower the better. – quamrana Mar 03 '11 at 14:25
2

What you want to do is determine what is new code, and verify that the new code is covered by some test.

Determining code coverage in general can be accomplished with any of a variety of test coverage tools. Many test coverage tools simply re-instrument your entire application; you then run the tests to determine coverage.

Our (Semantic Designs') line of Test Coverage tools can determine, from a changed-file list, just the individual files that need to be re-instrumented and, with careful test organization, just the tests that need to be re-executed. This minimizes the cost of re-running your tests, and you still end up with the same overall coverage data. (In fact, these tools detect which tests need to be re-run based on changes at the method level.)

Once you have test coverage data, what you want to know is whether the specifically new code is covered by some test. You can do this sloppily with just the test coverage data, if you know which files changed, by insisting that the changed files have 100% coverage. That probably doesn't work in practice.

You could instead take advantage of SD's Smart Differencer tools to get a more precise answer. These tools compare two versions of a file in terms of the language's syntax (e.g., expression, statement, declaration, method body, not just changed source lines) and conceptual editing operations (move, copy, delete, insert, rename-identifier-within-block). SmartDifferencer deltas tend to be both smaller and finer than what you would get from a plain diff tool.

It is easy to extract from the SmartDifferencer's output a list of changed lines. One could compute the intersection of that, per file, with the lines covered by the test coverage data. If the changed lines do not all fall within the set of covered lines, then "new" code hasn't been tested, and you can raise a flag, stop a check-in, or whatever else signals that your check-in policy has been violated.

The TestCoverage and SmartDifferencer tools don't come out-of-the-box with this computation done for you, but it should be a pretty easy script to implement.
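For illustration, the intersection itself might look something like this (the input format is invented for the example; producing the per-file sets of changed lines and covered lines is whatever your differencer and coverage tool give you):

```python
#!/usr/bin/env python
# Sketch of the "changed lines must be covered" check described above.
# Inputs are assumed to be dicts mapping file paths to sets of line numbers:
# one derived from the differencer output, one from the coverage report.
import sys

def uncovered_changes(changed_lines, covered_lines):
    """Return {path: sorted list of lines that changed but were never executed}."""
    problems = {}
    for path, changed in changed_lines.items():
        covered = covered_lines.get(path, set())
        missed = sorted(changed - covered)
        if missed:
            problems[path] = missed
    return problems

if __name__ == '__main__':
    # Hypothetical data standing in for real tool output.
    changed = {'src/main/java/Foo.java': {10, 11, 12, 40}}
    covered = {'src/main/java/Foo.java': {10, 11, 12}}

    problems = uncovered_changes(changed, covered)
    for path, lines in sorted(problems.items()):
        print('%s: changed lines not covered by any test: %s' % (path, lines))
    sys.exit(1 if problems else 0)   # non-zero exit can block the check-in
```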

Ira Baxter
  • Could you just say that the ratio of covered:uncovered code is not allowed to drop below the current value, rather than attempting to discover what is new code? I think determining what is new code is "hard", as an added file might just be a file being renamed in SVN (which is a delete and add). – Stephen Paulger Mar 25 '11 at 17:22
  • You can do that trivially with any test coverage tool that will give you a ratio (as ours do). But I don't see the point: you'll just get programmers gaming the system, and when thresholds get too low, they'll go write tests for some old simple code that already works to get the ratio up, rather than the new code they just submitted. – Ira Baxter Mar 25 '11 at 17:56
  • It's a shame the Smart Differencer tools don't support python. – Stephen Paulger Mar 25 '11 at 18:02
  • @StephenPaulger: It does. Python 2.6 and 3.1, other dialects as we see demand. I think you are telling me the website is not up-to-date; we have enough products so that this is a bit of an issue :-} – Ira Baxter Mar 25 '11 at 18:18
  • Oh cool, not sure how I missed that. Time for another eye-test clearly. – Stephen Paulger Mar 25 '11 at 18:19
  • @StephenPaulger: Perhaps the real shame is we don't have test coverage tools for Python :-{ We can build them, just don't see the market at this point. – Ira Baxter Mar 25 '11 at 18:26
0

If you use Maven, the Cobertura plugin can be a good choice (and not as annoying for developers as an SVN hook): http://mojo.codehaus.org/cobertura-maven-plugin/usage.html

Dmytro
  • I can picture the devs skipping that very easily and often :S – Nicolas Rodríguez Seara Mar 02 '11 at 22:14
  • 1
    Usually it's just OK to say, "Folks, please run Cobertura before committing and don't forget to write tests". Normal developers will stop making crappy commits if you ask them :) At least this approach worked in several teams that I know. – Dmytro Mar 02 '11 at 22:25
  • Although devs could skip Maven tests very easily, if you have Jenkins (or another CI tool) well configured to run the tests before building the software, then even if the developers insist on skipping the tests, the CI will warn the team that the build doesn't work as expected. – Miere Dec 18 '12 at 16:28