
Before asking this question, I read these questions and their answers:

and read other blog posts too, like Martin Fowler's TestCoverage.

My conclusion (a summary, of course) is that the community says to:

  • not waste time (where time is money) creating tests just to reach 100% code coverage.
  • accept that a magic number like 80% or 90% test coverage may already cover 99.99999% of the functionality. So why waste time chasing the remaining 0.000001%?

I agree with that. But I worry about giving the developer the opportunity to skip writing a test just because he believes it is not important. I know we can avoid these mistakes by having another person review the code before it is published.

The Question

Thinking of a way to keep track of what the developer decided not to test: would it be good practice to create a kind of //special comment so the developer can explicitly mark code that he believes is not worth testing? Or would that just be irrelevant information cluttering the code? Can someone suggest another way to accomplish this?

Before reading any answers to this question, my opinion is that it is a good practice, since a third person could then check the mark and agree, or not, with why the developer left that code uncovered.

Java example:

public String encodeToUTF8(String value){
    String encodedValue = null;

    try {
        encodedValue = URLEncoder.encode(value, "UTF-8");
    }
    catch (UnsupportedEncodingException ignore) {
        // [coverage-not-need] this exception will never occur because UTF-8 is a supported encoding type
    }
    return encodedValue;
}

Terminology: 100% code coverage means cover all branches, not only all lines.
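To illustrate that distinction, here is a minimal sketch (the `abs` method is hypothetical, not from the question): a single test input can execute every line of a method while still leaving a branch untested.

```java
public class CoverageDemo {
    // Calling abs(5) alone executes every line of this method, so line
    // coverage reports 100%. But the (n < 0) branch of the ternary is
    // never taken, so branch coverage is only 50%. A second call with a
    // negative argument is needed for full branch coverage.
    public static int abs(int n) {
        return (n >= 0) ? n : -n;
    }

    public static void main(String[] args) {
        System.out.println(abs(5));   // exercises only the n >= 0 branch
        System.out.println(abs(-3));  // exercises the n < 0 branch
    }
}
```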

Paulo

1 Answer

Most coverage tools have exactly that: a special comment where you can declare that this code will not have coverage. For example, Perl's Devel::Cover uses # uncoverable and Ruby's simplecov has # :nocov:.
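As a minimal sketch of simplecov's paired markers (the method and its branch condition are made up for illustration), everything between the two # :nocov: comments is excluded from the coverage report:

```ruby
def platform_home
  if RUBY_PLATFORM =~ /mswin|mingw/
    # :nocov:
    # Windows-only branch: excluded because the test suite never
    # runs on Windows, so it can never be covered there.
    "C:\\Users"
    # :nocov:
  else
    "/home"
  end
end
```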

However, I would caution against the developer prematurely declaring things uncoverable, or relying on it too heavily. The developer who wrote the code can be blind to testing opportunities. And like any comment, it can fall out of date if the surrounding code changes. Used too much, it gives a false sense of confidence in your test coverage.

Use it as a last resort after you've done your testing, run coverage, and determined that the statement really is all but impossible to test. Again, I caution against using it as an excuse to paper over things which are simply too hard to test. Often that's indicative of a needed redesign rather than truly untestable code.


Your example code is a perfect example of misusing an "uncoverable" marker. If the exception can never happen, I have to wonder why there's a catch block there at all. As written, if it does happen, it will be silenced and the caller will be left wondering why they're getting a NullPointerException somewhere later in their code. Instead, there should be no try/catch block; the exception should be allowed to propagate in the exceptional case where encoding fails.

public String encodeToUTF8(String value) throws UnsupportedEncodingException {
    return URLEncoder.encode(value, "UTF-8");
}

I'm not a Java programmer, but I know it requires checked exceptions like UnsupportedEncodingException to be either declared or handled. If Java forces you to catch the exception (ugh), the catch should assert or rethrow; something that doesn't silence the exception.
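A hedged sketch of that approach (one option among several): since UnsupportedEncodingException is a checked exception, the catch can rethrow it as an AssertionError, so the "impossible" case fails loudly instead of being swallowed, and callers are not burdened with a throws clause.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class Utf8Encoder {
    public static String encodeToUTF8(String value) {
        try {
            return URLEncoder.encode(value, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            // UTF-8 is a standard charset that every JVM is required to
            // support, so this branch should be unreachable. Rethrowing as
            // AssertionError makes any violation of that assumption loud.
            throw new AssertionError("UTF-8 must be supported", e);
        }
    }
}
```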

And that's why you want to use an "uncoverable" marker very, very sparingly and only after much scrutiny. Examining uncovered code often leads to finding hidden bugs or poorly designed code.

Schwern