
I'm new to both Django and unit testing, and I'm trying to build unit tests for my models, but I've been having some difficulty.

I have several models working closely together:

  • Resource: maintains a file resource
  • MetadataField: represents a metadata field that can be added to resources; corresponds to a table full of fields
  • MetadataValue: matches MetadataField IDs with Resource IDs and a corresponding value; this is the intermediary table for the Resource - MetadataField many-to-many relationship
  • MetadataSchema: represents a schema consisting of many MetadataFields. Each Resource is assigned a MetadataSchema, which controls which MetadataFields it is represented by

Relationships:

Resource - MetadataField       : Many-to-Many through MetadataValue
MetadataValue - MetadataSchema : Many-to-Many
Resource - MetadataSchema      : One-to-Many
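
For reference, a stripped-down sketch of what the models might look like in Django (field names here are illustrative, not my exact code):

    from django.db import models

    class MetadataField(models.Model):
        name = models.CharField(max_length=100)

    class MetadataSchema(models.Model):
        name = models.CharField(max_length=100)

    class Resource(models.Model):
        file = models.FileField(upload_to="resources/")
        # One-to-many: each Resource is assigned a single schema
        schema = models.ForeignKey(MetadataSchema, on_delete=models.CASCADE)
        # Many-to-many with MetadataField, through the intermediary table
        metadata_fields = models.ManyToManyField(
            MetadataField, through="MetadataValue")

    class MetadataValue(models.Model):
        resource = models.ForeignKey(Resource, on_delete=models.CASCADE)
        field = models.ForeignKey(MetadataField, on_delete=models.CASCADE)
        value = models.CharField(max_length=255)
        schemas = models.ManyToManyField(MetadataSchema)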

I'm not sure how to write tests for these models. The model testing in the Test Driven Django tutorial seems to mostly cover initializing objects and verifying attributes. Setting up any one of these objects, though, requires all of the others, so the tests would all depend on code they're not meant to be testing. For example, if I want to create a Resource, I also have to assign it a MetadataSchema and values for the fields in that schema.

I've looked around for good examples of unit-tested models in Django but haven't been able to find anything (the Django website doesn't seem to have unit tests, and the projects I've looked at either have poor or missing testing, or in a couple of cases have good testing but use almost no models).

Here are the possible approaches I see:

  • Do a lot of mocking, to ensure that I only ever test one class, and keep the unit tests on the models very simple: test only their methods/attributes, not that the relationships are functioning correctly. Then rely on higher-level integration tests to pick up any problems in the relationships.
  • Design unit tests that DO rely on other functionality, and accept that a break in one function will break more than one test, provided it remains easy to see where the fault occurred. For example, I might have a method testing whether I can successfully add a MetadataValue to a resource, which would require setting up at least one MetadataSchema and Resource. I could then use a try/except block so that if the test fails before the assertions dealing with what I'm actually meant to be testing, it gives a specific error message suggesting the fault lies elsewhere (see the sketch after this list). That way I could quickly scan multiple failed test messages to find the real culprit. It wouldn't be possible to do this separation reliably in every test, though.
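
Here's a rough sketch of what I mean for the second option (model and field names as in the sketch above; "myapp" is a placeholder module path):

    from django.test import TestCase

    from myapp.models import (
        MetadataField, MetadataSchema, MetadataValue, Resource)

    class MetadataValueTest(TestCase):
        def setUp(self):
            # Setup exercises the *other* models; if it blows up, report
            # that the fault probably lies outside MetadataValue.
            try:
                self.schema = MetadataSchema.objects.create(name="basic")
                self.resource = Resource.objects.create(schema=self.schema)
                self.field = MetadataField.objects.create(name="title")
            except Exception as exc:
                self.fail("Setup failed; the fault is probably in "
                          "Resource/MetadataSchema, not MetadataValue: %s" % exc)

        def test_add_value_to_resource(self):
            MetadataValue.objects.create(
                resource=self.resource, field=self.field, value="My title")
            self.assertIn(self.field, self.resource.metadata_fields.all())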

I'm having a hard time getting my head around this, so I don't know if this all makes sense, but if there are best practices for this sort of situation, please point me to them! Thanks.

Ciaran Phillips
  • Often, if the unit tests are too hard to write, it's pointing out that the design is overly complex. For instance, I'm not sure I see the value of MetadataField. Couldn't each MetadataValue table have a column for Resource and Value? Then you would have a table for each type of metadata, which is a bit clearer and not too outlandish, depending on how many different types of metadata you predict you will have, of course. – aychedee Dec 25 '12 at 12:32

3 Answers


For me, the purpose of unit testing is to separate UNITS of code and test ONLY them, without worrying about all their dependencies. If I understand your idea correctly, you want to create something that is more of an integration test (exercising the relationships between two or more models), which is also a very helpful, but still different, layer of testing :)

To test separate modules, especially when they use a lot of surrounding code, I prefer to mock the dependencies. Google returned Python mocks as a first option for you (I guess there are plenty of them out there).
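
For instance, here is a quick sketch with Python's mock library (the helper function under test is made up, just to show the idea): the mocks stand in for the real models, so the test never touches the other classes or the database.

    from unittest import mock  # or `import mock` on older Pythons
    import unittest

    # Hypothetical helper under test: it only *reads* a resource's
    # metadata values, so Mocks can stand in for the real models.
    def metadata_summary(resource):
        return {v.field.name: v.value
                for v in resource.metadatavalue_set.all()}

    class MetadataSummaryTest(unittest.TestCase):
        def test_summary_without_touching_the_database(self):
            value = mock.Mock(value="My title")
            value.field.name = "title"  # set .name by assignment; the `name`
                                        # kwarg would name the Mock itself
            resource = mock.Mock()
            resource.metadatavalue_set.all.return_value = [value]

            self.assertEqual(metadata_summary(resource),
                             {"title": "My title"})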

The other thing is that if there are TOO MANY dependencies you have to mock, it probably means you have to rethink your architecture, because that's tight coupling :)

Good luck!

op1ekun
  • Moreover, I see you want to write tests for already-written code, which is not bad (usually MORE TESTS is good). However, it's good to write the tests first (so-called Test Driven Development) to spot problems as early as possible. It's usually hard to write tests for overcomplicated code ;) – op1ekun Dec 25 '12 at 15:34

You can use Django fixtures to load data for testing, but this can be very time-consuming and hard to maintain if your models change a lot.

Instead, I suggest you use a library like Factory Boy, which allows you to create objects on demand in your tests when you need them. You can define as many factories as you want. You can see some examples here and here; you can also see some examples of mocking with the mocker library and a lot of tips on testing Django apps.
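
For example, a sketch against the models from the question (with current factory_boy versions; "myapp" is a placeholder module path):

    import factory

    from myapp import models

    class MetadataSchemaFactory(factory.django.DjangoModelFactory):
        class Meta:
            model = models.MetadataSchema

        name = factory.Sequence(lambda n: "schema-%d" % n)

    class ResourceFactory(factory.django.DjangoModelFactory):
        class Meta:
            model = models.Resource

        # SubFactory builds the required schema automatically, so a test
        # can just call ResourceFactory() and get a valid object graph.
        schema = factory.SubFactory(MetadataSchemaFactory)

In a test, ResourceFactory() then returns a saved Resource with a schema already attached, so you don't repeat that setup everywhere.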

iferminm

Use fixtures; they let you load model data without writing the code by hand.
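
A minimal sketch (the fixture file name and app path are placeholders):

    from django.test import TestCase

    from myapp.models import Resource

    class ResourceFixtureTest(TestCase):
        # Loaded before each test from myapp/fixtures/metadata_testdata.json;
        # the file can be generated with `manage.py dumpdata myapp`.
        fixtures = ["metadata_testdata.json"]

        def test_resource_has_schema(self):
            resource = Resource.objects.get(pk=1)
            self.assertIsNotNone(resource.schema)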

Thomas Orozco
  • But in some cases saving data for one model requires saving data in another (e.g. when updating a resource, the metadata is stored in a separate table), so fixtures don't get around this (unless I'm thinking about things in entirely the wrong way - a strong possibility!) – Ciaran Phillips Dec 25 '12 at 11:11