
I'm developing a new project. This is what has been done until now:

  1. A technical design.
  2. The model classes (data classes).
  3. All the interfaces in the project (but no implementations yet).

The next thing I want to do is implement the methods, from the skeleton (the high-level methods) down to the nested objects. However, I want to create a unit test for each method before I write its implementation. Implementing the high-level methods first won't be a problem, because I'm going to work against interfaces and bind the concrete implementations only in an external Java configuration file using DI.

The first method I'm going to implement is called lookForChanges(); it takes no parameters and returns void. This method is called by Spring's scheduler (@Scheduled), and it manages the whole process: it retrieves data from the DB, retrieves data from a web service, compares them, and if there are any changes it updates the database and sends a JMS message to a client. Of course, it doesn't do all those things by itself; it calls the relevant classes and methods.

So the first problem I had was how to create a unit test for a void method. In every tutorial, the tested methods accept parameters and return a result. I found an answer to that in this question. He says that even if there's no result to check, one can at least make sure that the methods inside the tested method were called, in the correct order and with the correct parameters.

I rather liked this answer, but the problem is that I'm working TDD, so in contrast to the person who asked that question, I'm writing the test before implementing the tested method, and I don't yet know which methods it will call and in what order. I can guess, but I will only be sure once the method has actually been implemented.

So, how can I test a void skeleton method before I implement it?

Alon
    What is the method supposed to do? That determines what you should test. Void methods are harder to test because you'd need to observe _side effects_ in the test code, which is considered somewhat an anti-pattern. So, what is `lookForChanges()` supposed to modify in the state of the class-under-test? – Mick Mnemonic Jun 02 '18 at 22:29
  • @MickMnemonic I've explained what it's supposed to do: interacting with the DB layer to retrieve data, interacting with the web-service layer to retrieve data, comparing the data from both places, and if there's a difference between them, interacting with the DB layer to update the DB and with the JMS layer to send a notification to a client. It does not change anything in the object's state (the object is named MainService). – Alon Jun 02 '18 at 22:37
  • Fair enough. The method is basically equivalent to `main()` and writing a unit test for it is certainly possible but will not add very much value as you'd be primarily testing interactions against mocks (DB, WS, JMS). Maybe the data comparison could be extracted into a separate, unit-testable method/class? – Mick Mnemonic Jun 02 '18 at 22:44
  • @MickMnemonic yes, of course, this method will not do the comparison by itself but will call another method to do that. But how can I test that 'main' method? I mean, what can I test if it does not return any value and I don't have the implementation written yet? If I did have the implementation written, I could at least test that all the methods are called (using spies), in case someone removes a line. – Alon Jun 02 '18 at 22:53
  • I am assuming that you mean that you don't have the _implementation of the dependencies_ written yet. That's what you use mocks for; if your code uses DI and you code against interfaces, then this should be okay. I can try to explain this more in an answer. – Mick Mnemonic Jun 02 '18 at 22:56
  • @MickMnemonic no, I meant what I said. As I wrote in my question: "I want to create a unit test for each method before I write the implementation". I think this is the recommended way to work TDD nowadays. This is the whole problem. Otherwise it would be a piece of cake. – Alon Jun 02 '18 at 23:03
  • Would it be possible for you to create the environment, perhaps simplistically, within your test? This is a bit more than looking for side-effects by creating the inputs and looking for the required effects. A simple database, an equally simple webservice and a client to receive the message - all of which you could control and observe to see the correct behavior. Things like this may grow into a test fixture for other methods as you build your tests. – Michael McKay Jun 03 '18 at 03:13

1 Answer


So the first problem I had was how to create a unit test for a void method.

A void method implies collaborators. You verify those.

Example. Suppose we needed a task that would copy System.in to System.out. How would we write an automated test for that?

void copy() {
    // Does something clever with System.in and System.out
}

But if you squint a little bit, you'll see you really have code that looks like

void copy() {
    InputStream in = System.in;
    PrintStream out = System.out;
    // Does something clever with `in` and `out`
}

If we perform an extract-method refactoring on this, then we might end up with code that looks like

void copy() {
    InputStream in = System.in;
    PrintStream out = System.out;
    copy(in, out);
}

void copy(InputStream in, PrintStream out) {
    // Does something clever with `in` and `out`
}

The latter of these is an API that we can test - we configure the collaborators, pass them to the system under test, and verify the changes afterwards.

We don't, at this point, have a test for void copy(), but that's OK, as the code there is "so simple that there are obviously no deficiencies".
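To make the idea concrete, here is a minimal sketch of that test, using in-memory streams as stand-ins for System.in and System.out. The CopyTask class name and the byte-by-byte loop are illustrative assumptions, not from the original; a real project would use a test framework rather than a main method.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.PrintStream;

class CopyTask {
    // The testable overload: collaborators are passed in explicitly.
    static void copy(InputStream in, PrintStream out) throws IOException {
        int b;
        while ((b = in.read()) != -1) {
            out.write(b);
        }
        out.flush();
    }
}

public class CopyTaskTest {
    public static void main(String[] args) throws IOException {
        // Arrange: in-memory stand-ins for System.in and System.out
        InputStream in = new ByteArrayInputStream("hello".getBytes());
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        PrintStream out = new PrintStream(sink);

        // Act
        CopyTask.copy(in, out);

        // Assert: everything read from `in` arrived at `out`
        if (!sink.toString().equals("hello")) {
            throw new AssertionError("expected 'hello', got: " + sink);
        }
    }
}
```

The test never touches the real console; it configures the collaborators, runs the system under test, and inspects the sink afterwards.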

Notice that, from the point of view of a test, there's not a lot of difference between the following designs

{
    Task task = new Task();
    task.copy(in, out);
}

{
    Task task = new Task(in, out);
    task.copy();
}

{
    Task task = Task.createTask();
    task.copy(in, out);
}

{
    Task task = Task.createTask(in, out);
    task.copy();
}

A way of thinking about this is: we don't write the API first, we write the test first.

// Arrange the test context to be in the correct initial state
// ???
// Verify that the test context arrived in final state consistent with the specification.

Which is to say, before you start thinking about the API, you first need to work out how you are going to evaluate the result.

Same idea, different spelling: if the effects of the function call are undetectable, then you might as well just ship a no-op. If a no-op doesn't meet your requirements, then there must be an observable effect somewhere -- you just need to work out whether that effect is observed directly (inspecting a return value), or by proxy (inspecting the effect on some other element in the solution, or a test double playing the role of that element).

OK so now I can pass params to the method for testing it, but what can I test?

You test what it is supposed to do.

Try this thought experiment - suppose you and I were pairing, and you proposed this interface

interface Task {
    void lookForChanges();
}

and then, after some careful thought, I implemented this:

class NoOpTask implements Task {
    @Override
    public void lookForChanges() {}
}

How would you demonstrate that my implementation doesn't satisfy the requirements?

What you wrote in the question was "it updates the database and sends a JMS message to a client", so there are two assertions to consider: did the database get updated, and was a JMS message sent?

The whole thing looks something like this

Given:
    A database with data `A`
    A webservice with data `B`
    A JMS client with no messages

When:
    The task is connected to this database, webservice, and JMS client
    and the task is run

Then:
    The database is updated with data `B`
    The JMS client has a message.
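That Given/When/Then can be sketched directly in code with hand-rolled test doubles. The interface names (Database, WebService, JmsSender) and the ChangeDetector class are hypothetical, shaped after the question's description of lookForChanges(); a real project would likely use Mockito instead of anonymous fakes.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical collaborator interfaces, shaped after the question's description.
interface Database {
    String load();
    void save(String data);
}
interface WebService {
    String fetch();
}
interface JmsSender {
    void send(String message);
}

// The system under test: orchestrates its collaborators, returns nothing.
class ChangeDetector {
    private final Database db;
    private final WebService ws;
    private final JmsSender jms;

    ChangeDetector(Database db, WebService ws, JmsSender jms) {
        this.db = db;
        this.ws = ws;
        this.jms = jms;
    }

    void lookForChanges() {
        String stored = db.load();
        String remote = ws.fetch();
        if (!stored.equals(remote)) {
            db.save(remote);
            jms.send("changed: " + remote);
        }
    }
}

public class ChangeDetectorTest {
    public static void main(String[] args) {
        // Given: a database with data "A", a web service with data "B",
        // and a JMS client with no messages -- all in-memory fakes.
        List<String> sentMessages = new ArrayList<>();
        String[] dbState = {"A"};
        Database db = new Database() {
            public String load() { return dbState[0]; }
            public void save(String data) { dbState[0] = data; }
        };
        WebService ws = () -> "B";
        JmsSender jms = sentMessages::add;

        // When: the task is wired to these doubles and run.
        new ChangeDetector(db, ws, jms).lookForChanges();

        // Then: the database was updated and a message was sent.
        if (!dbState[0].equals("B")) throw new AssertionError("database not updated");
        if (sentMessages.size() != 1) throw new AssertionError("no JMS message sent");
    }
}
```

Note that the test only specifies the observable outcome (database state, messages sent); it would pass equally well against any implementation of lookForChanges() that produces those effects, which is exactly what lets you write it before the implementation exists.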

It looks like what you suggest is an end-to-end test.

It does look like one. But if you use test doubles for these collaborators, rather than live systems, then your test is running in an isolated and deterministic shell.

It's probably a sociable test - the test doesn't know or care about the implementation details of the system under test in the when clause. I make no claim that the SUT is one-and-exactly-one "unit".

I have to see the implementation of foo first. Am I wrong?

Yes - you need to understand the specification of foo, not the implementation.

VoiceOfUnreason
  • Thank you for the detailed answer. I liked this approach and I will consider changing my design to work that way. However, I fear it does not solve the main problem, which is that my lookForChanges() method **returns** void. OK so now I can pass params to the method for testing it, but what can I test? It doesn't return any value, and I don't yet know which methods it's going to call (I can guess, but it's not 100% sure before I write the implementation). – Alon Jun 03 '18 at 18:27
  • Thank you again for elaborating. It looks like what you suggest is an end-to-end test. Imagine I have two methods: foo() and bar(). foo uses bar. Now bar has been corrupted. I still want foo to pass the test. It can be done by providing a mock of bar in the unit test of foo. Then it will be a unit test, but to do so, I have to see the implementation of foo first. Am I wrong? – Alon Jun 04 '18 at 02:06