Before asking this question, I read these questions and their answers:
- What is a reasonable code coverage % for unit tests (and why)?
- Is 100% code coverage a really good thing when doing unit tests?
- Unit testing code coverage - do you have 100% coverage?
and read other blog posts too, like Martin Fowler's TestCoverage.
My conclusion - a summary, of course - is that the community says to:
- not waste time (and time is money) creating tests just to reach 100% code coverage.
- accept that a magic number like 80% or 90% coverage may already cover 99.99999% of the functionality. So why waste time chasing the remaining 0.00001%?
I agree with that. But I am worried about giving the developer the opportunity to skip a test just because he believes it is not important. I know we can avoid such mistakes by having another person review the code before it is published.
The Question
Thinking about a way to track what the developer decided should not be tested: would it be a good practice to create a kind of //special comment
in the code, so the developer can explicitly mark the places he knows are not worth testing? Or would that just be irrelevant information cluttering the code? Can someone suggest another way to accomplish this?
Before reading any answers to this question, my opinion is that it is a good practice, since a third person could then check, and agree or not, why that code was not covered by the developer.
java example:
public String encodeToUTF8(String value) {
    String encodedValue = null;
    try {
        encodedValue = URLEncoder.encode(value, "UTF-8");
    }
    catch (UnsupportedEncodingException ignore) {
        // [coverage-not-need] this exception will never occur because UTF-8 is a supported encoding type
    }
    return encodedValue;
}
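One possible alternative to a plain comment could be a custom annotation, so the marker becomes something reviewers and tools can search for instead of free text. This is only a sketch of the idea; the @NotCovered name and its reason element are invented for illustration and are not part of any library or coverage tool:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker annotation (name and "reason" element invented for this sketch).
// Retained in the class files so tooling could, in principle, be configured to report it.
@Retention(RetentionPolicy.CLASS)
@Target({ElementType.METHOD, ElementType.CONSTRUCTOR, ElementType.TYPE})
public @interface NotCovered {
    // Forces the developer to write down why the code is intentionally left untested.
    String reason();
}

The encodeToUTF8 method above would then be marked with @NotCovered(reason = "UTF-8 is always supported") instead of the [coverage-not-need] comment, and the reviewer could simply list every usage of the annotation to agree or disagree with each justification.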
Terminology: by 100% code coverage I mean covering all branches, not only all lines.