So the first problem I had is how to create a unit test for a void method.
A void method implies collaborators. You verify those.
Example: suppose we needed a task that would copy `System.in` to `System.out`. How would we write an automated test for that?
void copy() {
    // Does something clever with System.in and System.out
}
But if you squint a little bit, you'll see that you really have code that looks like this:
void copy() {
    InputStream in = System.in;
    PrintStream out = System.out;
    // Does something clever with `in` and `out`
}
If we perform an extract-method refactoring on this, then we might end up with code that looks like this:
void copy() {
    InputStream in = System.in;
    PrintStream out = System.out;
    copy(in, out);
}

void copy(InputStream in, PrintStream out) {
    // Does something clever with `in` and `out`
}
The latter of these is an API that we can test - we configure the collaborators, pass them to the system under test, and verify the changes afterwards.
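For instance, a minimal sketch of such a test, assuming JUnit 5 and that both `copy` methods live on a `Task` class (the class and test names here are illustrative):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.PrintStream;

import org.junit.jupiter.api.Test;

class CopyTest {
    @Test
    void copiesInputToOutput() {
        // Arrange: in-memory stand-ins for System.in and System.out
        InputStream in = new ByteArrayInputStream("hello\n".getBytes());
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        PrintStream out = new PrintStream(sink);

        // Act: exercise the parameterised overload
        new Task().copy(in, out);
        out.flush();

        // Assert: whatever went in came out
        assertEquals("hello\n", sink.toString());
    }
}
```

The `ByteArrayOutputStream` plays the role of `System.out`: instead of watching the real console, we inspect the stand-in after the call.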
We don't, at this point, have a test for `void copy()`, but that's OK, as the code there is "so simple that there are obviously no deficiencies".
Notice that, from the point of view of a test, there's not a lot of difference between the following designs:
{
    Task task = new Task();
    task.copy(in, out);
}
{
    Task task = new Task(in, out);
    task.copy();
}
{
    Task task = Task.createTask();
    task.copy(in, out);
}
{
    Task task = Task.createTask(in, out);
    task.copy();
}
A way of thinking about this is: we don't write the API first, we write the test first.
// Arrange the test context to be in the correct initial state
// ???
// Verify that the test context arrived in a final state consistent with the specification.
Which is to say, before you start thinking about the API, you first need to work out how you are going to evaluate the result.
Same idea, different spelling: if the effects of the function call are undetectable, then you might as well just ship a no-op. If a no-op doesn't meet your requirements, then there must be an observable effect somewhere -- you just need to work out whether that effect is observed directly (inspecting a return value), or by proxy (inspecting the effect on some other element in the solution, or a test double playing the role of that element).
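As a sketch of the "by proxy" case: here the only observable effect of a void call lands on a collaborator, and a spy records it (the `Greeter` and `Console` names are invented for illustration):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

class GreeterTest {
    // Hypothetical collaborator role, played below by a test double
    interface Console { void println(String line); }

    // The system under test: a void method whose only effect lands on the collaborator
    static class Greeter {
        void greet(String name, Console console) {
            console.println("hello, " + name);
        }
    }

    @Test
    void greetingIsObservedByProxy() {
        // A spy that records what was written, making the effect inspectable
        List<String> written = new ArrayList<>();
        Console spy = written::add;

        new Greeter().greet("world", spy);

        // The list is our proxy for the effect on the real console
        assertEquals(List.of("hello, world"), written);
    }
}
```

The recorded list is the proxy: if `greet` were a no-op, the test would fail.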
OK, so now I can pass parameters to the method to test it, but what can I test?
You test what it is supposed to do.
Try this thought experiment - suppose you and I were pairing, and you proposed this interface
interface Task {
    void lookForChanges();
}
and then, after some careful thought, I implemented this:
class NoOpTask implements Task {
    @Override
    public void lookForChanges() {}
}
How would you demonstrate that my implementation doesn't satisfy the requirements?
What you wrote in the question was "it updates the database and sends a JMS message to a client", so there are two assertions to consider - did the database get updated, and was a JMS message sent?
The whole thing looks something like this:
Given:
    A database with data `A`
    A webservice with data `B`
    A JMS client with no messages
When:
    The task is connected to this database, webservice, and JMS client,
    and the task is run
Then:
    The database is updated with data `B`
    The JMS client has a message.
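Here's how that might look with hand-rolled test doubles, given the `Task` interface above. Everything below - the collaborator roles, the doubles, and the `SyncTask` implementation - is a sketch under assumed names, not a prescribed design:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

class LookForChangesTest {
    // Hypothetical collaborator roles; real names would come from your domain
    interface Database { String read(); void write(String data); }
    interface WebService { String fetch(); }
    interface MessageSender { void send(String message); }

    // In-memory doubles: isolated, deterministic stand-ins for the live systems
    static class FakeDatabase implements Database {
        private String data;
        FakeDatabase(String data) { this.data = data; }
        @Override public String read() { return data; }
        @Override public void write(String data) { this.data = data; }
    }

    static class RecordingSender implements MessageSender {
        final List<String> messages = new ArrayList<>();
        @Override public void send(String message) { messages.add(message); }
    }

    // One possible implementation of Task, built from its collaborators
    static class SyncTask implements Task {
        private final Database db;
        private final WebService ws;
        private final MessageSender jms;

        SyncTask(Database db, WebService ws, MessageSender jms) {
            this.db = db;
            this.ws = ws;
            this.jms = jms;
        }

        @Override
        public void lookForChanges() {
            String latest = ws.fetch();          // what the webservice has
            if (!latest.equals(db.read())) {     // has anything changed?
                db.write(latest);                // update the database
                jms.send("changed: " + latest);  // notify the client
            }
        }
    }

    @Test
    void updatesDatabaseAndNotifiesClient() {
        // Given: a database with data A, a webservice with data B, no messages
        FakeDatabase db = new FakeDatabase("A");
        WebService ws = () -> "B";
        RecordingSender jms = new RecordingSender();

        // When: the task is connected to these collaborators and run
        Task task = new SyncTask(db, ws, jms);
        task.lookForChanges();

        // Then: the database holds B and the JMS client has a message
        assertEquals("B", db.read());
        assertEquals(List.of("changed: B"), jms.messages);
    }
}
```

Substitute `new NoOpTask()` for the `SyncTask` and both assertions fail, which is exactly the demonstration that a no-op doesn't satisfy the requirements.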
It looks like what you suggest is an end-to-end test.
It does look like one. But if you use test doubles for these collaborators, rather than live systems, then your test is running in an isolated and deterministic shell.
It's probably a sociable test - the test doesn't know or care about the implementation details of the system under test in the When clause. I make no claims that the SUT is one-and-exactly-one "unit".
I have to see the implementation of `foo` first. Am I wrong?
Yes - you need to understand the specification of `foo`, not the implementation.