
If you have 100% test coverage and all tests pass, does that mean that the code is guaranteed to be correct and that writing more tests is pointless?


2 Answers


It's only correct as far as the logic of your testing is correct.

I'll give the most stupid example possible...

If, for example, I have a class like this (Java):

public class Example {

    public int timesTwo(int x){
        return x*2;
    }

}

with the following test (which, as you can see, is illogical and useless):

import org.junit.Test;                     // JUnit 4 imports assumed
import static org.junit.Assert.assertTrue;

public class ExampleTest {

    @Test
    public void timesTwo() {
        // Calls the method, but asserts nothing about its result.
        new Example().timesTwo(5);
        assertTrue(true);
    }

}

Most coverage tools will report that this class has been tested 100%! So no: coverage is a good indicator of what still needs to be tested, but the correctness of the test logic itself isn't assured.
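
For contrast, here is a sketch of what a meaningful test of the same class could look like (the test class name and the JUnit 4 imports are assumptions for this sketch, not part of the original example):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ExampleRealTest {

    @Test
    public void timesTwoDoublesItsInput() {
        // Exactly the same coverage as the useless test above,
        // but this time the result is actually checked.
        assertEquals(10, new Example().timesTwo(5));
    }

}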

Mark Seemann

If you have 100% test coverage and all tests pass, does that mean that the code is guaranteed to be correct and that writing more tests is pointless?

No. Counter-example: for a division function, a test case with 0 as the divisor can be missing (example taken from here: https://stackoverflow.com/a/555824/3905529). The tests only verify that the function works for the inputs used in the unit tests. But there may be edge cases where the code throws exceptions, and perhaps no unit test asserts exactly this, even with a coverage of 100%.
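
To make this concrete, here is a minimal sketch of such a counter-example (the class and test names are invented for illustration, and JUnit 4 is assumed):

public class Divider {

    public int divide(int dividend, int divisor) {
        return dividend / divisor;
    }

}

with a test that exercises every line:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DividerTest {

    @Test
    public void divide() {
        // Covers 100% of Divider, yet never passes 0 as the divisor,
        // so the ArithmeticException case is never exercised or asserted.
        assertEquals(2, new Divider().divide(10, 5));
    }

}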

Another problem I have often seen is that the if-block of a condition is implemented and tested, but the corresponding else-block is missing, and there is also no test verifying that the else-case is handled by the function. The else-part may be missing completely, so the tests are green, yet the function has semantic flaws because of the missing code. Coverage can still be 100%, because coverage only measures the code that exists, not the code that should exist.
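
Here is a sketch of that situation (names invented, JUnit 4 assumed). Suppose regular customers were supposed to get a smaller discount, but that else-part was never written:

public class DiscountService {

    public double applyDiscount(double price, boolean isPremiumCustomer) {
        if (isPremiumCustomer) {
            price = price * 0.9;
        }
        // Missing: the else-case for regular customers (say, 5% off)
        // was never implemented, so there is nothing here to cover.
        return price;
    }

}

together with tests that cover both branches:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscountServiceTest {

    @Test
    public void premiumCustomerGetsTenPercentOff() {
        assertEquals(90.0, new DiscountService().applyDiscount(100.0, true), 0.001);
    }

    @Test
    public void regularCustomerPaysFullPrice() {
        // This test is green and even covers the "false" branch, so coverage is 100%,
        // but the missing discount logic for regular customers is never detected.
        assertEquals(100.0, new DiscountService().applyDiscount(100.0, false), 0.001);
    }

}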

Besides this, there may be other issues; see: https://sqa.stackexchange.com/a/36940

anion