Do your unit tests constitute 100% code coverage? Yes or no, and why or why not.
-
Can't say I've ever booked a flight to *really* test my code... :-P – Nick Bedford Sep 25 '09 at 05:28
-
You should check this question out: http://stackoverflow.com/questions/90002/what-is-a-reasonable-code-coverage-for-unit-tests-and-why/90021 – Jon Limjap Sep 25 '09 at 05:37
17 Answers
No, for several reasons:
- Reaching 100% coverage is really expensive compared to reaching 90% or 95%, for a benefit that is not obvious.
- Even with 100% coverage, your code is not perfect. Take a look at this method (in fact, it depends on which type of coverage you are talking about - branch coverage, line coverage...):
public static String foo(boolean someCondition) {
    String bar = null;
    if (someCondition) {
        bar = "blabla";
    }
    return bar.trim();
}
and the unit test:
assertEquals("blabla", foo(true));
The test will succeed, and your code coverage is 100%. However, if you add another test:
assertEquals("blabla", foo(false));
then you will get a NullPointerException. And since you were already at 100% coverage with the first test, you would not necessarily have written the second one!
Generally, I consider that critical code must be covered at almost 100%, while the rest of the code can be covered at 85-90%.

-
+1 for stating that 100% code coverage does not imply a perfect test suite. You'd need 100% path coverage, which is exceedingly difficult (and impossible in many cases). – Falaina Sep 25 '09 at 05:34
-
You are talking about Function Coverage, a measure of whether all functions in the program are called during testing. I would expect this metric to be 100% in all cases; how could you trust a test suite that didn't call all of the functions in your code at least once? – Robert Harvey Sep 25 '09 at 06:31
-
I'm not talking about function coverage here! In my example, the first unit test gives 100% *line* coverage, not *function* coverage. However, as stated by Falaina, the *path* coverage is not 100% here (which is extremely hard to get), and that's why the second test will fail, even though I already had 100% *line* coverage with the first test... – Romain Linsolas Sep 25 '09 at 06:51
-
6"It is really expensive to reach the 100% coverage, compared to the 90% or 95%" I don't agree, this 5% is hard to test because it is not well designed (because testable is part of the design). For the same reason this 5% untestable probably contain much more bugs than the remaining of the covered code, and I find it odd to not cover by tests the code that is mostly error-prone??! – Patrick from NDepend team Feb 19 '14 at 13:09
-
You've really proved why it's never a good idea to pre-assign a variable with a result: String bar = null. null should be assigned to bar in the else case, then 100% coverage can test for a null exception. – danwag Jan 27 '17 at 15:13
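A minimal sketch of the restructuring danwag suggests (my illustration, not code from the answer): making the null assignment an explicit else branch means a foo(true) test alone no longer reaches 100% line coverage, so the coverage tool itself pushes you to write the foo(false) test that exposes the NullPointerException.
public static String foo(boolean someCondition) {
    String bar;
    if (someCondition) {
        bar = "blabla";
    } else {
        bar = null; // now an uncovered line until a foo(false) test exists
    }
    return bar.trim(); // still throws NullPointerException when someCondition is false
}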
-
@romaintaz Well said. But how does a team define which code is critical? – user2954463 May 03 '17 at 17:05
To all the 90%-coverage testers:
The problem with doing so is that the 10% of code that is hard to test is also the non-trivial code that contains 90% of the bugs! This is the conclusion I reached empirically after many years of TDD.
And after all, this is a pretty straightforward conclusion: this 10% of code is hard to test because it reflects a tricky business problem, a tricky design flaw, or both. These exact reasons often lead to buggy code.
But also:
- 100% covered code that decreases over time to less than 100% coverage often pinpoints a bug, or at least a flaw.
- 100% covered code used in conjunction with contracts is the ultimate weapon for getting close to bug-free code (see the sketch after this list). Code Contracts and Automated Testing are pretty much the same thing.
- When a bug is discovered in 100% covered code, it is easier to fix: since the code responsible for the bug is already covered by tests, it shouldn't be hard to write new tests to cover the bug fix.
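As a rough illustration of the contracts-plus-tests point (my sketch in Java, with a made-up Account class; the answer refers to .NET Code Contracts): a precondition and an automated test express the same rule from two sides.
class Account {
    private long balance;

    void deposit(long amount) {
        // Contract: deposits must be strictly positive.
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive: " + amount);
        }
        balance += amount;
    }

    long balance() { return balance; }
}

public class ContractDemo {
    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(100);
        if (a.balance() != 100) throw new AssertionError("deposit should add to balance");

        try {
            a.deposit(-5);
            throw new AssertionError("contract should reject a negative deposit");
        } catch (IllegalArgumentException expected) {
            // The automated test verifies exactly what the contract states.
        }
    }
}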

No, because there is a practical trade-off between perfect unit tests and actually finishing a project :)

It is seldom practical to get 100% code coverage in a non-trivial system. Most developers who write unit tests shoot for the mid-to-high 90s.
An automated testing tool like Pex can help increase code coverage. It works by searching for hard-to-find edge cases.
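Pex is a .NET tool, so as a hand-rolled stand-in (my sketch, with a hypothetical clamp function), here is the kind of boundary input such an automated search hunts for:
public class EdgeCaseDemo {
    // Hypothetical function under test: clamps value into the range [lo, hi].
    static int clamp(int value, int lo, int hi) {
        return Math.max(lo, Math.min(hi, value));
    }

    public static void main(String[] args) {
        int[][] cases = {
            // value, lo, hi, expected
            {0, -1, 1, 0},                  // ordinary case
            {Integer.MIN_VALUE, -1, 1, -1}, // extreme low boundary
            {Integer.MAX_VALUE, -1, 1, 1},  // extreme high boundary
            {5, 5, 5, 5},                   // degenerate one-value range
        };
        for (int[] c : cases) {
            int got = clamp(c[0], c[1], c[2]);
            if (got != c[3]) {
                throw new AssertionError(java.util.Arrays.toString(c) + " -> " + got);
            }
        }
        System.out.println("all edge cases pass");
    }
}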

-
The problem with doing so is that the 10% of hard-to-test code is also the non-trivial code that contains 90% of the bugs! This is the conclusion I got empirically after many years of TDD. – Patrick from NDepend team Feb 20 '11 at 10:11
Yes we do.
It depends on what language and framework you're using as to how easy that is to achieve, though.
We're using Ruby on Rails on my current project. Ruby is very "mockable", in that you can stub or mock out large chunks of your code without having to build the overly complicated class composition and construction designs that you would need in other languages.
That said, we only have 100% line coverage (basically what rcov gives you). You still have to think about testing all the required branches.
This is only really possible if you include it from the start as part of your continuous integration build, and break the build if coverage drops below 100%, prompting developers to fix it immediately. Of course, you could choose some other number as a target, but if you're starting fresh, there isn't much difference in effort between getting to 90% and getting to 100%.
We've also got a bunch of other metrics that break the build if they cross a given threshold (cyclomatic complexity and duplication, for example). These all go together and help reinforce each other.
Again, you really have to have this stuff in place from the start to keep working at a strict level - either that, or set some target you can hit and gradually ratchet it up until you reach a level you're happy with.
Does doing this add value? I was skeptical at first, but I can honestly say that yes, it does. Not primarily because you have thoroughly tested code (although that is definitely a benefit), but more in terms of writing simple code that is easy to test and reason about (an example follows below). If you know you have to have 100% test coverage, you stop writing overly complex if/else/while/try/catch monstrosities and Keep It Simple, Stupid.
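For instance (my illustration, not code from this answer, with a made-up Order type), replacing a nested conditional with guard clauses keeps behavior identical while cutting the paths a test suite has to cover:
interface Order {
    boolean isPaid();
    boolean hasStock();
}

class Shipping {
    // Nested version: three levels deep; the paths multiply and tests get awkward.
    static String shipOrderNested(Order o) {
        if (o != null) {
            if (o.isPaid()) {
                if (o.hasStock()) {
                    return "shipped";
                } else {
                    return "backordered";
                }
            } else {
                return "awaiting payment";
            }
        } else {
            return "no order";
        }
    }

    // Guard-clause version: same behavior, four straight-line exits, four trivial tests.
    static String shipOrder(Order o) {
        if (o == null) return "no order";
        if (!o.isPaid()) return "awaiting payment";
        if (!o.hasStock()) return "backordered";
        return "shipped";
    }
}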

-
3"If you know you have to have 100% test coverage, you stop writing overly complex if/else/while/try/catch monstrosities" -- Very interesting point. – funroll Oct 16 '13 at 19:17
What I do when I get the chance is insert statements on every branch of the code that can be grepped for, and that record whether they've been hit, so that I can do some sort of comparison to see which statements have not been hit. This is a bit of a chore, so I'm not always good about it.
I just built a small UI app to use in charity auctions, that uses MySQL as its DB. Since I really, really didn't want it to break in the middle of an auction, I tried something new.
Since it was in VC6 (C++ + MFC), I defined two macros:
#define TCOV ASSERT(FALSE)  // halts in the debugger: this branch has not been visited yet
#define _COV ASSERT(TRUE)   // no-op marker: this branch has been visited
and then I sprinkled
TCOV;
throughout the code, on every separate path I could find, and in every routine.
Then I ran the program under the debugger, and every time it hit a TCOV, it would halt. I would look at the code for any obvious problems, then edit it to _COV and continue. The code would recompile on the fly and move on to the next TCOV.
In this way, I slowly, laboriously, eliminated enough TCOV statements that it would run "normally".
After a while, I grepped the code for TCOV, and that showed what code I had not tested. Then I went back and ran it again, making sure to test more branches I had not tried earlier. I kept doing this until there were no TCOV statements left in the code.
This took a few hours, but in the process I found and fixed several bugs. There is no way I could have had the discipline to make and follow a test plan that thorough. Not only did I know I had covered all branches, but it made me look at every branch while it was running - a very good kind of code review.
So, whether or not you use a coverage tool, this is a good way to root out bugs that would otherwise lurk in the code until a more embarrassing time.
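The macros above lean on VC6's halt-and-edit-and-continue workflow; a rough Java analogue (my sketch, not the author's code) records hand-placed markers at runtime and reports the ones never reached, which is the same information grepping for leftover TCOVs gives:
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Poor man's coverage markers: register the ids you sprinkled through the code,
// call Cov.hit(id) on each branch, and a shutdown hook lists branches never taken.
public final class Cov {
    private static final Set<String> pending = ConcurrentHashMap.newKeySet();

    static {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            if (!pending.isEmpty()) {
                System.err.println("Branches never hit: " + pending);
            }
        }));
    }

    private Cov() {}

    public static void expect(String... ids) {
        for (String id : ids) pending.add(id);
    }

    public static void hit(String id) {
        pending.remove(id);
    }
}
Sprinkling a call like Cov.hit("parse-error-path") on each branch and then exercising the program gives the same discipline without the debugger halts.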

-
Is this something you came up with? Seems like the technique could do with a name. – funroll Oct 16 '13 at 19:38
-
@funroll: Name? I just think of it as coverage testing. Got any ideas? – Mike Dunlavey Oct 17 '13 at 20:35
-
I like this method; I am going to try it as a way of branch testing. – Jamie S Oct 21 '15 at 14:24
I personally find 100% test coverage to be problematic on multiple levels. First and foremost, you have to make sure you are gaining a tangible, cost-saving benefit from the unit tests you write. In addition, unit tests, like any other code, are CODE. That means they, just like any other code, must be verified for correctness and maintained. The additional time spent verifying that extra code, maintaining it, and keeping those tests valid in response to changes in business code all adds cost. Achieving 100% test coverage and ensuring you test your code as thoroughly as possible is a laudable endeavor, but achieving it at any cost... well, is often too costly.
Error and validity checks that guard against fringe or extremely rare (but definitely possible) exceptional cases are an example of code that does not necessarily need to be covered. The amount of time, effort (and ultimately money) that must be invested to achieve coverage of such rare fringe cases is often wasteful in light of other business needs. Properties are often a part of code that, especially with C# 3.0, does not need to be tested, as most, if not all, properties behave exactly the same way and are excessively simple (a single-statement return or set). Investing tremendous amounts of time wrapping unit tests around thousands of properties could quite likely be better invested somewhere else, where a greater, more valuable, tangible return on that investment can be realized.
Beyond simply achieving 100% test coverage, there are similar problems with trying to set up the "perfect" unit test. Mocking frameworks have progressed to an amazing degree these days, and almost anything can be mocked (if you are willing to pay money, TypeMock can actually mock anything and everything, but it costs a lot). However, there are often times when dependencies of your code were not written in a mockable way (this is actually a core problem with the vast bulk of the .NET Framework itself). Investing time to achieve the proper scope of a test is useful, but putting in excessive amounts of time to mock away anything and everything under the sun, adding layers of abstraction and interfaces to make it possible, is most often a waste of time, effort, and ultimately money.
The ultimate goal of testing shouldn't really be to achieve the ultimate in code coverage. The ultimate goal should be to achieve the greatest value per unit of time invested in writing unit tests, while covering as much as possible in that time. The best way to achieve this is to take the BDD approach: specify your concerns, define your context, and verify that the expected outcomes occur for any piece of behavior being developed (behavior... not unit).

On a new project I practice TDD and maintain 100% line coverage. It mostly occurs naturally through TDD. Coverage gaps are usually worth the attention and are easily filled. If the coverage tool I'm using provided branch coverage or something else I'd pay attention to that, although I've never seen branch coverage tell me anything, probably because TDD got there first.
My strongest argument for maintaining 100% coverage (if you care about coverage at all) is that it's much easier to maintain 100% coverage than to manage less than 100% coverage. If you have 100% coverage and it drops, you immediately know why and can easily fix it, because the drop is in code you've just been working on. But if you settle for 95% or whatever, it's easy to miss coverage regressions and you're forever re-reviewing known gaps. It's the exact reason why current best practice requires one's test suite to pass completely. Anything less is harder, not easier, to manage.
My attitude is definitely bolstered by having worked in Ruby for some time, where there are excellent test frameworks and test doubles are easy. 100% coverage is also easy in Python. I might have to lower my standards in an environment with less amenable tools.
I would love to have the same standards on legacy projects, but I've never found it practical to bring a large application with mediocre coverage up to 100% coverage; I've had to settle for 95-99%. It's always been just too much work to go back and cover all the old code. This does not contradict my argument that it's easy to keep a codebase at 100%; it's much easier when you maintain that standard from the beginning.

No, because I spend my time adding new features that help the users, rather than writing tricky, obscure tests that deliver little value. I say unit test the big things, the subtle things, and the things that are fragile.

I generally write unit tests as a regression-prevention method. When a bug is reported that I have to fix, I create a unit test to ensure that it doesn't resurface in the future. I may create a few tests for sections of functionality I have to make sure stay intact (or for complex inter-part interactions), but I usually wait for a bug fix to tell me one is necessary.
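A minimal sketch of that habit (mine, not the author's; the bug number, Invoice, and InvoiceParser are made up for illustration, using JUnit 5): name the test after the report, so the regression stays documented where it is enforced.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class InvoiceParserRegressionTest {

    // Bug #4217 (hypothetical): totals crashed with a NullPointerException on an empty item list.
    @Test
    void bug4217_emptyLineItemListMustNotCrashTotals() {
        Invoice invoice = InvoiceParser.parse("{\"items\": []}");
        assertEquals(0, invoice.total()); // failed before the fix, guards against resurfacing
    }
}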

I usually manage to hit 93-100% coverage, but I don't aim for 100% anymore. I used to, and while it's doable, it's not worth the effort beyond a certain point, because testing the blindingly obvious usually isn't needed. A good example of this is the true branch of the following code snippet:
public void method(boolean someBoolean) {
    if (someBoolean) {
        return;
    } else {
        /* do lots of stuff */
    }
}
However, what is important is to get as close to 100% coverage as possible on the functional parts of the class, since those are the dangerous waters of your application - the misty bog of creeping bugs and undefined behaviour, and of course the money-making flea circus.

-
Nope, it's just there to emphasize what I'm going after. In fact, if this were production code, I wouldn't have added it. – Esko Oct 07 '09 at 19:10
-
"it's not worth the effort beyond a certain point because testing blindly obvious usually isn't needed" -- blindingly obvious code would take literally seconds to write a test for, so "it's not worth the effort" is an argument that somewhat falls flat. – Adam Parkin Jul 26 '13 at 20:43
-
@Adam, do you write tests to specify functionality or to satisfy the coverage counter? – Esko Jul 26 '13 at 21:04
-
@Esko Neither. I write tests to programmatically validate my intentions as the developer. Regardless of whether a method is trivial or complex, it's there to provide a specific piece of functionality, and the way I ensure the method does as intended is by writing a test. The thing that *helps* inform me whether I have in fact written a test is the coverage number. – Adam Parkin Jul 29 '13 at 20:19
From Ted Neward's blog:
By this point in time, most developers have at least heard of, if not considered adopting, the Masochistic Testing meme. Fellow NFJS'ers Stuart Halloway and Justin Gehtland have founded a consultancy firm, Relevance, that sets a high bar as a corporate cultural standard: 100% test coverage of their code.
Neal Ford has reported that ThoughtWorks makes similar statements, though it's my understanding that clients sometimes put accidental obstacles in the way of achieving said goal. It's ambitious, but as the ancient American Indian proverb is said to state,
If you aim your arrow at the sun, it will fly higher and farther than if you aim it at the ground.
Yes, I have had projects that have had 100% line coverage. See my answer to a similar question.
You can get 100% line coverage, but as others have pointed out here on SO and elsewhere on the internet, it's maybe only a minimum. When you consider path and branch coverage, there's a lot more work to do.
The other way of looking at it is to try to make your code so simple that it's easy to get 100% line coverage.
In many cases it's not worth getting 100% statement coverage, but in some cases, it is worth it. In some cases 100% statement coverage is far too lax a requirement.
The key question to ask is, "what's the impact if the software fails (produces the wrong result)?". In most cases, the impact of a bug is relatively low. For example, maybe you have to go fix the code within a few days and rerun something. However, if the impact is "someone might die in 120 seconds", then that's a huge impact, and you should have a lot more test coverage than just 100% statement coverage.
I lead the Core Infrastructure Initiative Best Practices Badge for the Linux Foundation. We do have 100% statement coverage, but I wouldn't say it was strictly necessary. For a long time we were very close to 100%, and just decided to do that last little percent. We couldn't really justify the last few percent on engineering grounds, though; those last few percent were added purely as "pride of workmanship". I do get a very small extra measure of peace of mind from having 100% coverage, but really it wasn't needed. We were over 90% statement coverage just from normal tests, and that was fine for our purposes. That said, we want the software to be rock-solid, and having 100% statement coverage has helped us get there. It's also easier to get 100% statement coverage today.
It's still useful to measure coverage, even if you don't need 100%. If your tests don't have decent coverage, you should be concerned. A bad test suite can have good statement coverage, but if you don't have good statement coverage, then by definition you have a bad test suite. How much you need is a trade-off: what are the risks (probability and impact) from the software that is totally untested? By definition it's more likely to have errors (you didn't test it!), but if you and your users can live with those risks (probability and impact), it's okay. For many lower-impact projects, I think 80%-90% statement coverage is okay, with better being better.
On the other hand, if people might die from errors in your software, then 100% statement coverage isn't enough. I would at least add branch coverage, and maybe more, to check on the quality of your tests. Standards like DO-178C (for airborne systems) take this approach - if a failure is minor, no big deal, but if a failure could be catastrophic, then much more rigorous testing is required. For example, DO-178C requires MC/DC coverage for the most critical software (the software that can quickly kill people if it makes a mistake). MC/DC is way more strenuous than statement coverage or even branch coverage.
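To make MC/DC concrete (my illustration, not from the answer; the deploy decision is invented): every condition must be shown to independently flip the decision's outcome, which for three conditions takes four test vectors, where branch coverage would be satisfied with two.
public class McDcDemo {
    // Decision under test: deploy = armed && (altitudeOk || manualOverride)
    static boolean deploy(boolean armed, boolean altitudeOk, boolean manualOverride) {
        return armed && (altitudeOk || manualOverride);
    }

    public static void main(String[] args) { // run with "java -ea" so asserts are enabled
        // Each pair below differs in exactly one condition, and the outcome flips.
        assert  deploy(true,  true,  false);  // baseline: true
        assert !deploy(false, true,  false);  // flip 'armed'          -> false
        assert !deploy(true,  false, false);  // flip 'altitudeOk'     -> false
        assert  deploy(true,  false, true);   // flip 'manualOverride' -> true (vs line above)
        System.out.println("4 vectors give MC/DC over 3 conditions");
    }
}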

I only have 100% coverage on new pieces of code that have been written with testability in mind. With proper encapsulation, each class and function can have functional unit tests that simultaneously give close to 100% coverage. It's then just a matter of adding some additional tests that cover some edge cases to get you to 100%.
You shouldn't write tests just to get coverage. You should be writing functional tests that test correctness/compliance. With a good functional specification that covers all the ground and a good software design, you can get good coverage for free.

There's a lot of good information here; I just wanted to add a few more benefits that I've found when aiming for 100% code coverage in the past:
- It helps reduce code complexity
Since it is easier to remove a line than it is to write a test case, aiming for 100% coverage forces you to justify every line, every branch, and every if statement, often leading you to discover a much simpler way to do things that requires fewer tests.
- It helps develop good test granularity
You can achieve high test coverage by writing lots of small tests that test tiny bits of implementation as you go. This can be useful for tricky bits of logic, but doing it for every piece of code no matter how trivial can be tedious, slow you down, and become a real maintenance burden, as well as making your code harder to refactor. On the other hand, it is very hard to achieve good test coverage with very high-level, end-to-end behavioural tests, because typically the thing you are testing involves many components interacting in complicated ways, and the permutations of possible cases become very large very quickly.
Therefore, if you are practical and also want to aim for 100% test coverage, you quickly learn to find a level of granularity for your tests at which you can achieve a high level of coverage with a few good tests: testing components at a level where they are simple enough that you can reasonably cover all the edge cases, but complicated enough that you can test meaningful behaviour. Such tests end up being simple, meaningful, and useful for identifying and fixing bugs. I think this is a good skill, and it improves code quality and maintainability.

A while ago I did a little analysis of coverage in the JUnit implementation, code written and tested by, among others, Kent Beck and David Saff.
From the conclusions:
Applying line coverage to one of the best-tested projects in the world, here is what we learned:
- Carefully analyzing coverage of code affected by your pull request is more useful than monitoring overall coverage trends against thresholds.
- It may be OK to lower your testing standards for deprecated code, but do not let this affect the rest of the code. If you use coverage thresholds on a continuous integration server, consider setting them differently for deprecated code.
- There is no reason to have methods with more than 2-3 untested lines of code.
- The usual suspects (simple code, dead code, bad-weather behavior, …) correspond to around 5% of uncovered code.
In summary, should you monitor line coverage? Not all development teams do, and even in the JUnit project it does not seem to be a standard practice. However, if you want to be as good as the JUnit developers, there is no reason why your line coverage would be below 95%. And monitoring coverage is a simple first step to verify just that.
