
I'm currently broadening my unit testing by using mock objects (NSubstitute in this particular case). However, I'm wondering what the current wisdom is when creating mock objects. For instance, I'm working with an object that contains various routines to grab and process data - no biggie here, but it will be used in a fair number of tests.

Should I create a shared function that returns the mock object, with all the appropriate methods and behaviours mocked for pretty much the whole testing project, and call that object into my unit tests? Or should I mock the object in every unit test, mocking only the behaviour I need for that test (although there will be times I'll mock the same behaviour on more than one occasion)?

Thoughts or advice are gratefully received...

SeanCocteau

3 Answers


I'm not sure if there is an agreed "current wisdom" on this, but here's my 2 cents.

First, as @codebox pointed out, re-creating your mocks for each unit test is a good idea, as you want your unit tests to run independently of each other. Doing otherwise can result in tests that pass when run together but fail when run in isolation (or vice versa). Creating the mocks required for tests is commonly done in test setup ([SetUp] in NUnit, the constructor in xUnit), so each test gets a newly created mock.
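
For example, here's a minimal NUnit/NSubstitute sketch of that setup (the IDataService interface and the test names are hypothetical, just for illustration):

    using NSubstitute;
    using NUnit.Framework;

    // Hypothetical dependency of the class under test.
    public interface IDataService
    {
        string GetData(int id);
    }

    [TestFixture]
    public class ProcessorTests
    {
        private IDataService _dataService;

        // NUnit runs [SetUp] before every test, so each test
        // starts with a freshly created substitute.
        [SetUp]
        public void SetUp()
        {
            _dataService = Substitute.For<IDataService>();
        }

        [Test]
        public void Uses_data_from_the_service()
        {
            _dataService.GetData(1).Returns("raw");
            // ... exercise the class under test with _dataService ...
        }
    }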

In terms of configuring these mocks, it depends on the situation and how you test. My preference is to configure them in each test with the minimum amount of configuration necessary. This is a good way of communicating exactly what that test requires of its dependencies. There is nothing wrong with some duplication in these cases.
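
As an illustration, here's a test that stubs only the single call it depends on (DataProcessor is a hypothetical class under test; IDataService is the interface from the sketch above):

    [Test]
    public void Returns_empty_result_when_the_service_has_no_data()
    {
        var dataService = Substitute.For<IDataService>();

        // The only configuration this test needs; anything more would
        // obscure what the behaviour under test actually depends on.
        dataService.GetData(42).Returns(string.Empty);

        var processor = new DataProcessor(dataService);

        Assert.That(processor.Process(42), Is.Empty);
    }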

If a number of tests require the same configuration, I would consider using a scenario-based test fixture (link disclaimer: shameless self-promotion). A scenario could be something like When_the_service_is_unavailable, and the setup for that scenario could configure the mocked service to throw an exception or return an error code. Each test then makes assertions based on that common configuration/scenario (e.g. should display error message, should send email to admin etc).
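
A rough sketch of what such a scenario fixture could look like (all names are made up; Throws() comes from the NSubstitute.ExceptionExtensions namespace):

    using System;
    using NSubstitute;
    using NSubstitute.ExceptionExtensions;
    using NUnit.Framework;

    [TestFixture]
    public class When_the_service_is_unavailable
    {
        private DataProcessor _processor; // hypothetical class under test

        // The scenario is configured once in setup; each test in the
        // fixture shares it and asserts a single outcome.
        [SetUp]
        public void GivenTheServiceIsUnavailable()
        {
            var dataService = Substitute.For<IDataService>();
            dataService.GetData(Arg.Any<int>()).Throws(new TimeoutException());
            _processor = new DataProcessor(dataService);
        }

        [Test]
        public void Should_report_an_error()
        {
            // Illustrative assertion only.
            Assert.That(_processor.Process(1), Is.EqualTo("error"));
        }
    }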

Another option if you have lots of duplicated bits of configuration is to use a Test Data Builder. This gives you reusable ways of configuring a number of different aspects of your mock or any other test data.
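
A sketch of the idea (the Customer type and the builder methods are invented for illustration):

    // Hypothetical domain type used by the tests.
    public class Customer
    {
        public string Name { get; }
        public bool IsActive { get; }
        public Customer(string name, bool isActive) { Name = name; IsActive = isActive; }
    }

    // Builder with sensible defaults; each test overrides only what it cares about.
    public class CustomerBuilder
    {
        private string _name = "any name";
        private bool _isActive = true;

        public CustomerBuilder Named(string name) { _name = name; return this; }
        public CustomerBuilder Inactive() { _isActive = false; return this; }

        public Customer Build() => new Customer(_name, _isActive);
    }

    // Usage in a test:
    // var customer = new CustomerBuilder().Named("Bob").Inactive().Build();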

Finally, if you're finding a large amount of configuration is required, it might be worth considering changing the interface of the test dependency to be less "chatty". By looking for a valid abstraction that reduces the number of calls required by the class under test, you'll have less to configure in your tests and a nice encapsulation of the responsibilities on which that class depends.
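
For instance (hypothetical interfaces, just to show the shape of the change):

    // "Chatty": the class under test makes three calls, so every
    // test has to configure up to three members.
    public interface IChattyStore
    {
        void Open();
        string Read(string key);
        void Close();
    }

    // Coarser abstraction: one call to configure per test, with the
    // open/read/close sequence encapsulated behind the interface.
    public interface IRecordStore
    {
        string ReadRecord(string key);
    }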

It is worth experimenting with a few different approaches and seeing what works for you. Any removal of duplication needs to be balanced with keeping each test case independent, simple, maintainable and reliable. If you find that a large number of tests fail for small changes, that you can't figure out the configuration an individual test needs, or that tests fail depending on the order in which they are run, then you'll want to refine your approach.

David Tchepak

I would create new mocks for each test - if you re-use them, you may get unexpected behaviour where the state of the mock from earlier tests affects the outcome of later tests.
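
For example, NSubstitute substitutes record every call made on them, so sharing one across tests can make Received() assertions order-dependent (IDataService is a hypothetical dependency):

    // Problematic: one substitute shared across tests.
    var dataService = Substitute.For<IDataService>();

    // Test A exercises the substitute...
    dataService.GetData(1);
    dataService.Received(1).GetData(1); // passes when test A runs first

    // ...but calls made in test A still count when test B asserts on
    // the same instance, so B's outcome depends on execution order.
    // A fresh substitute per test avoids this entirely.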

codebox
    That's true but hm, I believe the OP asks about re-using a mock implementation, rather than re-using a single instance of it. – Kos Aug 15 '12 at 23:36

It's hard to provide a general answer without looking at a specific case.

I'd stick with the same approach as I do everywhere else: first look at the tests as independent beings, then look for similarities and extract the common part out.

Your goal here is to follow DRY, so that your tests are maintainable in case the requirements change.

So...

  1. If it's obvious that every test in a group is going to use the same mock behaviour, provide it in your common set-up

  2. If each of them is significantly different, as in: the content of the mock constitutes a significant part of what you're testing and the test/mock relationship looks like 1:1, then it's reasonable to keep them close to the tests

  3. If the mocks differ between them, but only to some degree, you still want to avoid redundancy. A common SetUp won't help you, but you may want to introduce a utility like PrepareMock(args...) that covers the different cases (see the sketch after this list). This keeps your actual test methods free of repetitive set-up, but still lets you introduce any degree of difference between them.
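
A sketch of such a helper using NSubstitute (the parameters, the IDataService dependency, and the failure mode are all hypothetical; Throws() needs NSubstitute.ExceptionExtensions):

    // Shared helper covering the common set-up, with arguments
    // for the parts that differ between tests.
    private static IDataService PrepareMock(string data = "default", bool unavailable = false)
    {
        var dataService = Substitute.For<IDataService>();
        if (unavailable)
            dataService.GetData(Arg.Any<int>()).Throws(new TimeoutException());
        else
            dataService.GetData(Arg.Any<int>()).Returns(data);
        return dataService;
    }

    // In a test: var service = PrepareMock(unavailable: true);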

The tests look nice when you extract all similarities upwards (to a SetUp or helper methods) so that the only thing that remains in test methods is what's different between them.

Kos
Advocating following DRY in tests is not something I would have done. http://stackoverflow.com/questions/6453235/what-does-damp-not-dry-mean-when-talking-about-unit-tests – Ivan Alagenchev Jun 25 '15 at 22:39