
If you have nothing, you cannot write a test because there is nothing to test. This seems pretty obvious to me but never seems to be addressed by proponents of TDD.

In order to write a test, you first have to decide what the method or function you're going to test looks like. You have to know what parameters to pass to it and what you expect to get back. That is what comes first, not the test.

Tests can never come first. The thing that comes first is the design which specifies what classes and methods are going to exist.

Gungwald

3 Answers


It's the other way round.

If you write a test that calls a function which does not exist, your test suite fails and you get an error forcing you to define that function, just like writing any other test forces you to write the implementation.

Your tests don't even need to run to be good tests. But this kind of test is not meant to stay in your test suite. Such tests are sometimes referred to as "staircase tests": you need to write them to get going, but they are only instrumental.

What happens generally is that as soon as this test passes, you make it fail again by being more specific. Technically, the test you end up with is the same one you would have written after the fact, and it didn't take more time to write; but during this process you were able to run the test suite one or more times, so you spend less time in an invalid state, so to speak.

I would like to add that there is nothing untrue in your question, but your conclusion doesn't follow from the premise: it is true that what comes first is the specification, but there is nothing inconsistent about formalising this specification in a test before the code is written. The spec, and the tests, force you to write the code. TDD is an incremental way of formalising the spec that ensures the spec always comes first.
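A minimal sketch of what that staircase might look like, in Java with JUnit 5 (the `Price` class and its methods are names invented purely for illustration; what matters is the order of events, not the API):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PriceTest {

    // The first version of this test merely called a method that didn't exist yet:
    //
    //     new Price().total();
    //
    // The resulting compile error is the "failure" that forces you to declare
    // Price and total(). As soon as that passes (a stub returning 0 is enough),
    // you make the same test fail again by being more specific:
    @Test
    void totalIsTheSumOfAddedAmounts() {
        Price price = new Price();
        price.add(3);
        price.add(4);
        assertEquals(7, price.total()); // the assertion that made the 0-returning stub insufficient
    }
}

// The production code the test drives out, one small step at a time.
class Price {
    private int total = 0;

    void add(int amount) {
        total += amount;
    }

    int total() {
        return total;
    }
}
```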

geoffrey
  • You are saying the specification comes first and that is exactly my point. You have to have a spec before you can have a test that tests the spec. I don't care if you formalize it or not. You have to at least think through the spec and decide what it is going to be before you can test. Otherwise there is nothing to test. And you'd better put some real thought into that spec or you're going to end up rewriting the tests and the implementation if later you find that your spec was no good. It is too important a step to skip over. The spec comes first, then the test. – Gungwald Mar 18 '21 at 17:07
  • The tests don't test the spec, they test the implementation. Some even say that the tests are the spec. After all, what do you check before you ship? Especially in an ever-changing and adapting environment. Automated tests are an always up-to-date spec. If they weren't, they would be testing the wrong implementation. Maybe read Robert C. Martin; I think he explains TDD better than what you are reading. TDD doesn't mean you don't have use cases, it doesn't mean you don't do design, and it doesn't mean you don't think your algorithms through. Those who say otherwise do it a great disservice. – geoffrey Mar 18 '21 at 17:31
  • You said, "If you write a test that calls a function". If you called a function, you must have passed some parameters to it and expected a return type. If your function doesn't return anything then you have nothing to verify in your test. So, before you can write the test, you must have decided on a function name, with parameters and a return value. Therefore, the function declaration, signature, interface, spec, or whatever you want to call it, has to come first. The test does not come first. – Gungwald Mar 29 '21 at 07:56
  • If you want to test a function `foo :: Number -> Number`, you first need to write `foo()` in your test, your compiler yells at you because there is no such function, you import it, it yells at you because the file doesn't exist, you create the file, it yells at you because the file doesn't have an export named `foo`, you create a void function `foo`. Now your `foo()` test passes. This process forced you to create the function. Now that you have a function `foo`, you make your test expect a number, your test fails because `foo` is void, you change `foo` accordingly, and so on and so forth (see the sketch after these comments). – geoffrey Mar 29 '21 at 08:33
  • Note that you should have your environment set up so that your tests selectively run on file save. You shouldn't have to press a test button and run the whole test suite each time unless you decide to. Also, compile errors and runtime errors count as failing tests. The experience can be very snappy. Try it and you will find that you were doing it wrong. Expecting the API first in your test is closer to your philosophy than writing it first and then testing it. – geoffrey Mar 29 '21 at 08:39
  • You knew up front, before the test, that you wanted to test `foo :: Number -> Number`. That's what I'm saying. You have to have that function signature before you start the test. It doesn't matter if you've coded it or it's just in your head. Your example shows it, but somehow you're not realizing that you're doing it. I'm trying to get you to realize that you're doing it, instead of just repeating what someone else told you. – Gungwald Mar 29 '21 at 14:38
  • When I said "coded it" I meant just in terms of providing a declaration, not an implementation. For example, in Java you would write: `Number foo(Number n) {return null;}` or in C you would write: `Number foo(Number n);`. And I'm saying you don't necessarily have to go this far. It might just be in your head, as in your example. But you have to know what it is before the test makes any sense. – Gungwald Mar 29 '21 at 14:49
  • I see what you mean but I don't think it's as clear cut: you can allow yourself to explore a problem with TDD while not being completely clueless about what you are doing. Developers' assumptions about problems are often wrong, so when you do upfront design there is a good chance that your spec is imperfect. On the other hand, when you explore with a REPL/debugger, what you are really doing is writing a bunch of short-lived tests. It's a bit of a waste. If you refactor your tests as you go, updating them is not so bad, and if you have good discipline, changing your design is also not that bad. – geoffrey Mar 29 '21 at 15:41
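For readers who prefer code to prose, here is the sequence from the `foo :: Number -> Number` comment above, translated from its JavaScript-flavoured wording (imports, exports, a watcher re-running tests on save) into Java terms. The `Numbers` class and the doubling behaviour are hypothetical, chosen only to make the example concrete:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class FooTest {

    @Test
    void fooDoublesANumber() {
        // 1. Write the call first: the compiler complains that foo() does not exist.
        // 2. Declare a stub (static Number foo(Number n) { return null; }) so the
        //    suite runs again.
        // 3. Make the test expect a concrete value; it now fails because the stub
        //    returns null.
        // 4. Implement foo() until the assertion passes.
        assertEquals(4, Numbers.foo(2).intValue());
    }
}

// The implementation that the failing test forced into existence.
class Numbers {
    static Number foo(Number n) {
        return n.intValue() * 2;
    }
}
```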

To write a test, you have to first decide what the method or function you're going to test looks like. You have to know what parameters to pass to it and what you expect to get back. THAT is what comes first, NOT the test. Tests can NEVER come first. The thing that comes first is the design which specifies what classes and methods are going to exist.

Not quite right (not entirely wrong either - It's Complicated[tm])

If you look at the first example in Test Driven Development by Example, you'll see that Beck doesn't begin with classes and methods. He doesn't even begin with a test.

The very first thing that he creates is a "to-do" list, where each of the entries in the to-do list is a representation of a behavior (my terminology, not his). So we see things like

$5 + 10 CHF = $10 if rate is 2:1

These days, you'd be more likely to see this idea expressed as a Hoare triple (Given/When/Then, Arrange/Act/Assert, etc.). But what we have here is a reminder to the programmer that we want an automated check that measures the result of adding two different currencies together, and confirms that the result matches some specification.

In his exercise, his to-do list includes a "simpler" test, which is the one he attempts first:

$5 * 2 = $10

That same to-do list also includes some other concerns he has about the design, NOT expressed in test form. Also, the list grows as he works through the problem.

In this sense, the test absolutely comes first. We write the test in a language to be consumed by humans. Translating the test into a language understood by the machine comes later.
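To make the "translation" step concrete: the "$5 * 2 = $10" entry ends up, in the machine's language, as something roughly like the test below. This is a paraphrase of the book's opening example from memory, adapted to JUnit 5 syntax, not a verbatim quote:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class MoneyTest {

    // "$5 * 2 = $10" from the to-do list, expressed so the machine can check it.
    @Test
    void fiveDollarsTimesTwoIsTenDollars() {
        Dollar five = new Dollar(5);
        five.times(2);
        assertEquals(10, five.amount);
    }
}

// The minimal production code the test forces into existence.
class Dollar {
    int amount;

    Dollar(int amount) {
        this.amount = amount;
    }

    void times(int multiplier) {
        amount *= multiplier;
    }
}
```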


In the second step, where we describe the test to the machine, things get messier. It is absolutely the case that, as we are designing the test, we are also designing the communication protocol that allows the test to measure what the production code does. So there's a certain amount of communication design that is happening in parallel with the "test" design.

But even here, the test is not specifying all of the classes that are going to exist; it's only specifying what it needs to perform its measurement. We describe a facade, but we aren't specifying what lies beyond that facade.

It can happen, as we design more of the system, that the facade we specify is used only by tests, as a way of communicating with a different underlying design of production code.

(Note: I say classes here for consistency with the question and with early literature, taken primarily from examples in Smalltalk or Java. Feel free to substitute "functions" for "classes" if that makes you more comfortable.)

Now, the most common case is that the facade is the production code; we don't typically add elements to the design until we have a non-speculative motivation for them.
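A rough sketch of that idea (all names hypothetical): the test below pins down only the facade it needs in order to perform its measurement, and says nothing about the parsing, formatting, or persistence classes that may or may not exist behind it.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class ReportFacadeTest {

    @Test
    void renderedReportMentionsTheTitle() {
        ReportFacade reports = new ReportFacade();
        assertTrue(reports.render("Q3 results").contains("Q3 results"));
    }
}

// Only the facade is specified by the test; whatever lives behind it is
// free to change without the test noticing.
class ReportFacade {
    String render(String title) {
        return "== " + title + " ==";
    }
}
```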


"Unit testing" puts some strain on these ideas - how can you possibly write a unit test without first designing your unit boundaries?

The real answer is an unfortunate one -- Kent Beck didn't write unit tests. He wrote "programmer tests" (a term that got retconned in later) and called them unit tests.

Using the testing language of the 1990s (which is when all this mess started), a more appropriate term is probably "composite tests".

You've also got "the London School", that was trying to figure out how to TDD a particular design style; writing a test for that style requires a more complicated testing facade "up front" (roles and interfaces and stable substitute implementations and so on).
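A hand-rolled sketch of that London-style shape (hypothetical names, no mocking library): the test defines a role as an interface and supplies a stable substitute implementation up front, so the collaborator's API is designed before any real implementation of it exists.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class OrderTest {

    // The role the order collaborates with; at this point only the test needs it.
    interface PriceList {
        int priceOf(String sku);
    }

    // Production code driven out by the test below.
    static class Order {
        private final PriceList prices;
        private int total = 0;

        Order(PriceList prices) {
            this.prices = prices;
        }

        void add(String sku) {
            total += prices.priceOf(sku);
        }

        int total() {
            return total;
        }
    }

    @Test
    void totalIsTheSumOfItemPrices() {
        // Stable substitute implementation, written before any "real" PriceList.
        PriceList prices = sku -> sku.equals("apple") ? 3 : 5;

        Order order = new Order(prices);
        order.add("apple");
        order.add("pear");

        assertEquals(8, order.total());
    }
}
```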


It can also be worth keeping in mind the setting.

(Disclaimer: this isn't something I witnessed first hand - think "based on a true story" rather than "facts")

TDD (and its parent idea, "test first" programming in XP) is pushing back against "up front design" of the sort where you decide what the class hierarchy and relationships should be, and document them, before you actually sit down to write the code.

The core argument is that the design process needs shorter feedback loops: we don't get deeply committed to a particular design until we've acquired a lot of evidence that it is going to work out OK.


All that said, yes, it is absolutely the case that TDD, as a technique, works much better in the hands of someone who is already good at software design. See Michael Feathers on the "Look, Ma, no hands!" era.

There is no magic.

VoiceOfUnreason
  • If you write the test in a language of humans and then translate it to code later, you still have to have the thing that you're testing defined before you can know how to write the test. The specification always comes first, whether it is natural language or code. You may be physically writing a test, but you are mentally deciding what the object that you're testing has to look like (the specification) before you can test it. The specification always comes first. Otherwise there is nothing to test. – Gungwald Mar 18 '21 at 17:16
  • @Gungwald could you elaborate on what makes you think that because the spec comes first, the implementation should come first? Specifying an API via tests is exactly how TDD works: it's wishful thinking until you make the test pass; you write what you want the API to look like in your test, then you write the implementation which allows that test to be written. I see no inconsistency with your ideal. The only difference from a full spec is that tests are written incrementally, because if you take too big a leap some tests may not be written and your test suite may allow bugs to pass. – geoffrey Mar 18 '21 at 17:56
  • I did not say that the implementation should come first, only the spec. You have to know the name of the function, its parameters, and its return type before you can write a test that verifies its correctness. You must design a function's interface, signature, declaration, spec or whatever you want to call it, before you can write the test. The test does not come first. You must decide on the function's interface first, i.e. "the spec". – Gungwald Mar 29 '21 at 08:17

It's true that in order to write a test, the test writer must form some conception on how the test code can interact with the System Under Test. In that sense, conceptual design 'comes first'.

Test-driven development (TDD), however, is valuable because it's not (only) a quality assurance methodology. It's first and foremost a fast feedback loop.

While you may have an initial design in mind, once you start to write a test, you may discover that this design doesn't work (or is awkward to use). This often happens, and should cause you to immediately adjust course.

The red-green-refactor cycle suggests a model for thinking about TDD. Each such cycle may take a minute or two.

Thus, you may start with an initial design in mind, but then adjust it (or completely rethink it) every other minute.
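To give a feel for the timescale, here is one hypothetical red-green-refactor micro-cycle (the `Slug` class and its behaviour are invented for illustration):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class SlugTest {

    // RED: the test states the behaviour we want; it fails (or doesn't even
    // compile) because Slug.of does not exist yet.
    @Test
    void spacesBecomeDashesAndCaseIsLowered() {
        assertEquals("hello-world", Slug.of("Hello World"));
    }
}

// GREEN: the simplest code that makes the test pass.
// REFACTOR: with the test as a safety net, rename, extract, or rethink the
// API. If the call site in the test feels awkward, that is design feedback,
// and you act on it now rather than later.
class Slug {
    static String of(String title) {
        return title.trim().toLowerCase().replace(' ', '-');
    }
}
```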

never seems to be addressed by proponents of TDD

I disagree. Plenty of introductions to TDD discuss this. Two good books that discuss this (and much more) are Kent Beck's Test-Driven Development by Example and Steve Freeman and Nat Pryce's Growing Object-Oriented Software, Guided by Tests.

Mark Seemann
  • I'm not arguing against the TDD methodology, just the obviously incorrect idea that tests come first. As you said, the conceptual design comes first. That's close enough for me. But, what you really need before the test, is the name of the function, its parameters, and its return value. You must design the function interface or you can't even finish the part of the test that calls the function and verifies its return value. – Gungwald Mar 29 '21 at 08:07
  • @Gungwald This might partially depend on the language in question. I don't usually do this, but I have, at times, written a test that was basically 'wishful thinking'. Such a test would initially express what I'd like client code to look like. Subsequently, I'd massage it until it was valid code. The SUT's API sort of falls out of that process. It's rare, but the conceptual design can initially be so vague that it looks as though the test existed before the design... – Mark Seemann Mar 29 '21 at 11:06
  • Sure, sometimes you have to start with a guess. – Gungwald Mar 29 '21 at 14:28