1

I don't know much about Test-Driven Development (TDD), but I keep hearing that I need to start development with some test cases, then make those tests pass with the simplest possible solution, and then write more tests that fail again...

But the question is: when do I stop creating new tests? How do I know that my application meets the requirements?

Pedro Ghilardi

7 Answers

9

Shamelessly copying Kent Beck's answer to this question.

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don't typically make a kind of mistake (like setting the wrong variables in a constructor), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.

Different people will have different testing strategies based on this philosophy, but that seems reasonable to me given the immature state of understanding of how tests can best fit into the inner loop of coding. Ten or twenty years from now we'll likely have a more universal theory of which tests to write, which tests not to write, and how to tell the difference. In the meantime, experimentation seems in order.
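To make that philosophy concrete, here is a minimal JUnit 4 sketch; the DiscountCalculator class and its pricing thresholds are invented for illustration. The trivial object construction gets no test of its own, while the conditional discount logic, the kind of code people tend to get wrong, gets explicit cases including the boundaries.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical example: the pricing rules are invented for illustration.
    // The trivial constructor is not tested on its own; the branchy discount
    // logic is, because conditionals are where mistakes tend to happen.
    public class DiscountCalculatorTest {

        static class DiscountCalculator {
            // Discount rate for a given order total and loyalty flag.
            double discountFor(double orderTotal, boolean loyalCustomer) {
                if (orderTotal >= 1000) {
                    return loyalCustomer ? 0.15 : 0.10;
                } else if (orderTotal >= 100) {
                    return loyalCustomer ? 0.05 : 0.0;
                }
                return 0.0;
            }
        }

        @Test
        public void largeOrderFromLoyalCustomerGetsTopDiscount() {
            assertEquals(0.15, new DiscountCalculator().discountFor(1000, true), 0.0001);
        }

        @Test
        public void midRangeOrderIsOnlyDiscountedForLoyalCustomers() {
            assertEquals(0.05, new DiscountCalculator().discountFor(100, true), 0.0001);
            assertEquals(0.0, new DiscountCalculator().discountFor(100, false), 0.0001);
        }

        @Test
        public void smallOrderGetsNoDiscount() {
            assertEquals(0.0, new DiscountCalculator().discountFor(99.99, false), 0.0001);
        }
    }

The numbers don't matter; the selection does. The tests concentrate on the logic the author is most likely to get wrong, and skip the kind of code that rarely breaks.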

Matthew Vines
  • Yeah, I certainly don't need credit for this, but this really stuck with me, and I think it answers this question just as well. – Matthew Vines Jul 07 '09 at 20:16
  • Note that question was not about TDD, which is different. – John Saunders Jul 07 '09 at 20:18
  • That's true, but I think the same principles apply. The overall concept is to not write tests for the sake of tests, but to ensure proper functionality. Even when doing TDD. Write a test for a purpose, code the purpose, validate, and refactor. Don't write more tests to cover the same purpose unless you are worried you have missed an edge case. Good is good, but what good means will differ by developer and by team. Just give it some thought. – Matthew Vines Jul 07 '09 at 20:22
  • @John Saunders the question (which was mine) was written with TDD in mind, although its content was agnostic of methodology. – Johnno Nolan Jul 07 '09 at 20:54
2

Code coverage tools can provide useful information about how well tested your code is. Such tools will identify code paths that have not been exercised by your tests.
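As a rough illustration (the abs helper below is hypothetical): with only the single test shown, a coverage tool such as JaCoCo or Cobertura would report the negative-input branch as never executed, which points directly at a test that is still worth writing.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical example: with only the single test below, a coverage report
    // would flag the negative-input branch of abs() as never executed.
    public class CoverageGapTest {

        static int abs(int value) {
            if (value < 0) {
                return -value;   // never exercised by the test below
            }
            return value;
        }

        @Test
        public void absOfAPositiveNumberIsUnchanged() {
            assertEquals(5, abs(5));
        }

        // Missing: something like assertEquals(5, abs(-5));
        // the coverage report makes that gap visible.
    }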

Dan Dyer
2

In TDD, you stop writing tests when you stop writing code (or just slightly before the last code is written), unless (as mentioned) your code coverage is too low.

stevedbrown
  • Yes, when you stop writing code you can stop testing it. Until you find that edge case you didn't think about, or something else changes. When your understanding of the problem changes, then you change the tests or add new ones (then change the code so the tests pass). – Hamish Smith Jul 07 '09 at 20:15
2

Lifecycle

If you follow Test-Driven Development to the letter, you have a 5-step cycle (a small code sketch of the cycle follows the diagram below):

  1. Write a test: for each unit (the smallest piece of code you can test) you write a test in which you determine what that unit will be responsible for. A useful guide is the so-called Right-BICEP checklist (right results, boundary conditions, inverse relationships, cross-check results, error conditions, performance characteristics).
  2. Run the tests and see them fail: in this step the newly written tests should fail. This is the so-called red step, as the unit tests show up in red. If the tests do not fail, you probably didn't write them correctly.
  3. Implement the unit: write the code, even if you hard-code it; the point of this step is simply to get to the next, green, step.
  4. Run the tests and see them pass: the green step, as all the tests should pass. If they don't, you're not done writing code.
  5. Done? No, refactor!

[TDD lifecycle diagram: image from Wikipedia (source: wikimedia.org)]
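Here is a minimal JUnit 4 sketch of steps 1 to 4, with an invented Greeter class: the test is written before Greeter exists and fails (red), the simplest implementation that makes it pass comes next (green), and refactoring follows with the test kept green.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Step 1: this test is written before Greeter exists; running it at that
    // point fails, which is the red step.
    public class GreeterTest {

        @Test
        public void greetsUserByName() {
            assertEquals("Hello, Ada!", new Greeter().greet("Ada"));
        }
    }

    // Step 3: the simplest implementation that makes the test pass (green).
    // Step 5: refactor afterwards, re-running the test to keep it green.
    class Greeter {
        String greet(String name) {
            return "Hello, " + name + "!";
        }
    }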

What to test

  • Test all units until you reach complete code coverage (wishful thinking in most cases, since you would need tests for severe failure scenarios like tripping over the power cable, running out of disk space, a flood, etc.). If you reach the 90% ballpark you're more than done.
  • If you find a bug in your code, create a unit test that reproduces it, then fix the code. Repeat. (A sketch of such a regression test follows this list.)
  • If your code has a GUI, try any automated functional testing tool you can find. In my case Selenium or JMeter would do the trick. Selenium is a good tool, as it allows you to record your tests in Firefox and then replay them on demand.
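For the "bug becomes a unit test" point above, a hypothetical regression test might look like this; the tokenize helper and the bug it pins down are invented for illustration.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical regression test: suppose a bug report showed that tokenizing
    // an empty string returned one empty token instead of none. The test pins
    // the fixed behaviour down so the bug cannot silently come back.
    public class TokenizerRegressionTest {

        static String[] tokenize(String input) {
            if (input == null || input.isEmpty()) {
                return new String[0];   // the fix: empty input yields no tokens
            }
            return input.split(",");
        }

        @Test
        public void emptyInputYieldsNoTokens() {
            assertEquals(0, tokenize("").length);
        }

        @Test
        public void commaSeparatedValuesAreSplit() {
            assertEquals(2, tokenize("a,b").length);
        }
    }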

Continuous integration

Because running all the tests all the time is time-consuming, you can delegate most of these mundane tasks to a continuous integration server that runs them for you at predefined intervals. This does not mean you don't have to run tests before you commit your code: you still need to run the tests for the part of the system you were changing, since running the entire suite would be counterproductive if the system is large (a sketch of one way to make that split follows). The CI server will inform you of any failures, and you will need to buy drinks for all of your colleagues on top of fixing the code you broke ;)
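One way (among many) to separate a fast pre-commit suite from the slower tests left to the CI server is JUnit 4 categories. Everything below, class names included, is a hypothetical sketch:

    import org.junit.Test;
    import org.junit.experimental.categories.Categories;
    import org.junit.experimental.categories.Categories.ExcludeCategory;
    import org.junit.experimental.categories.Category;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite.SuiteClasses;

    // Hypothetical split: developers run the fast pre-commit suite locally,
    // while the CI server runs everything on its own schedule.
    public class CommitStageTests {

        /** Marker interface used as a JUnit 4 category for long-running tests. */
        public interface SlowTests {}

        public static class PriceCalculatorTest {
            @Test
            public void totalsAreSummed() {
                // fast unit test: runs before every commit and on CI
            }
        }

        public static class FullCatalogueImportTest {
            @Category(SlowTests.class)
            @Test
            public void importsTheWholeCatalogue() {
                // slow integration test: typically left to the CI server
            }
        }

        // What a developer runs before committing; the CI server simply runs
        // both test classes, SlowTests included.
        @RunWith(Categories.class)
        @ExcludeCategory(SlowTests.class)
        @SuiteClasses({ PriceCalculatorTest.class, FullCatalogueImportTest.class })
        public static class PreCommitSuite {}
    }

The exact mechanism matters less than the principle: keep the feedback loop before a commit short, and let the CI server pay the cost of the full suite.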

Miha Hribar
1

You stop writing tests when you have no more functionality to add to your code. There may be some additional edge cases you want to make sure are covered, but beyond that, when you don't have anything more to have your code do, you don't have any more TDD tests to write (Acceptance and QA tests are a different story).

Yishai
0

There are certain areas you may find difficult to test, such as the GUI and data access, but apart from that you write tests until your objectives are met.

Johnno Nolan
0

In an ideal world, where I would follow eXtreme Programming practices (not just TDD), my customer is supposed to provide me with some automated functional tests. When such a test goes green, I stop writing tests and go back to my customer to ask for more functional tests that do not pass (because the tests are the specification, and if my customer does not provide me with failing tests I won't know what to do).

I could explain it another way, aimed at a more practical world. At XP France we organize TDD dojos on a regular basis (once a week); you could call them TDD training sessions. There we practice TDD on toy problems. The idea is to propose a test that fails, then write code to make it pass. Never propose a test that passes without any new code.

Whoever proposes a test that goes green without any code has to buy beers for the others. So that's one way to know it's time to stop testing: when you are no longer able to write tests that fail, you are finished. (Anyway, coding after drinking is bad practice.)

kriss