21

I tried looking through all the pages about unit tests and could not find this question. If this is a duplicate, please let me know and I will delete it.

I was recently tasked to help implement unit testing at my company. I realized that I could unit test all the Oracle PL/SQL code, Java code, HTML, JavaScript, XML, XSLT, and more.

Is there such a thing as too much unit testing? Should I write unit tests for everything above or is that overkill?

Ascalonian
  • 14,409
  • 18
  • 71
  • 103
  • 3
    Unit testing is like cheese. You can have too much. – skaffman Jul 07 '09 at 19:46
  • 4
    Yes, but I don't need the downvotes :) I've had refactorings that should have taken a few hours take weeks because of all the unit tests. – Bill K Jul 07 '09 at 19:49
  • 1
    http://stackoverflow.com/questions/153234/how-deep-are-your-unit-tests – Johnno Nolan Jul 07 '09 at 19:50
  • You can never have enough cheese! Cheese is just too awesome for that! I do agree with the "too much" for unit testing though. – DeadHead Jul 07 '09 at 19:51
  • 4
    By the way, after working with both, I'd say that unit tests are much more necessary in a dynamic language than in a static one. Static languages do much of what unit testing does at compile time... Again, unpopular opinion (esp. around the dynamic language types) – Bill K Jul 07 '09 at 19:51
  • Probably because we don't really get thrown by type issues all that often. I've really never understood what all the hubbub is all about. If data has so many hands touching it you can't even be certain what types you'll end up with, that's an architecture problem. – Erik Reppen Aug 30 '12 at 01:49
  • 1
    I would love to have the "problem" of too many unit tests. It's certainly easier to deal with too many unit tests than it is the real problem of too few unit tests. Worst case when dealing with too many tests: just make your change, double-check the tests that fail, then fix them or delete them. Who knows, that ugly test that failed might reveal a mistake you made. – Buttons840 Jan 10 '14 at 00:18

14 Answers

20

This depends on the project and its tolerance for failure. There is no single answer. If you can risk a bug, then don't test everything.

When you have tons of tests, it is also likely that you will have bugs in your tests, adding to your headaches.

Test what needs testing and leave out what does not, which is often the fairly simple stuff.

Aiden Bell
  • 28,212
  • 4
  • 75
  • 119
  • 2
    +1 to bugs in your tests - they're just as likely as bugs in the code. – Mark Ransom Jul 08 '09 at 18:47
  • 10
    Bugs in our tests? Perhaps we should write unit tests for our tests. – CiscoIPPhone Jul 08 '09 at 19:08
  • 1
    @CiscoIPPhone ... what about those tests? They might have bugs ;) What we need is some meta-test-testing framework that will test test test integrity. – Aiden Bell Jul 08 '09 at 19:15
  • Perhaps an AI that will read your code and then design tests against it? :-) – Ascalonian Jul 09 '09 at 11:20
  • 1
    Tests are NOT just as likely to have bugs in as your product code. This is because of the part of the TDD discipline that asserts you should run tests to watch them fail before you implement the code that makes them pass. Because of this, the product code itself forms a test for your tests. This is a crucial aspect of TDD. Without this, and the other strict parts of the discipline, you're not doing TDD, you're just "writing tests", or maybe "writing tests first." – Jonathan Hartley Jun 14 '16 at 17:27
11

Is there such a thing as too much unit testing?

Sure. The problem is finding the right balance between enough unit testing to cover the important areas of functionality, and focusing effort on creating new value for your customers in terms of system functionality.

Unit testing code and leaving code uncovered by tests both have a cost.

The costs of excluding code from unit testing may include (but aren't limited to):

  • Increased development time due to fixing issues you can't automatically test
  • Fixing problems discovered during QA testing
  • Fixing problems discovered when the code reaches your customers
  • Loss of revenue due to customer dissatisfaction with defects that made it through testing

The costs of writing a unit test include (but aren't limited to):

  • Writing the original unit test
  • Maintaining the unit test as your system evolves
  • Refining the unit test to cover more conditions as you discover them in testing or production
  • Refactoring unit tests as the underlying code under test is refactored
  • Lost revenue when it takes longer for your application to enter the market
  • The opportunity cost of implementing features that could drive sales

You have to make your best judgement about what these costs are likely to be, and what your tolerance is for absorbing such costs.

In general, unit testing costs are mostly absorbed during the development phase of a system - and somewhat during its maintenance. If you spend too much time writing unit tests you may miss a valuable window of opportunity to get your product to market. This could cost you sales or even long-term revenue if you operate in a competitive industry.

The cost of defects is absorbed during the entire lifetime of your system in production - up until the point the defect is corrected. And potentially even beyond that, if the defect is significant enough that it affects your company's reputation or market position.

Travis Hohl
  • 2,176
  • 2
  • 14
  • 15
LBushkin
  • 129,300
  • 32
  • 216
  • 265
  • 1
    Good answer. I might add that most systems (or modules) typically spend much more time in maintenance than in initial development. If you take this approach I think it also makes sense to weigh not only the costs, but also the benefits of testing and the benefits of not testing. This will give you a bit more confidence in your decision. – cwash Jul 08 '09 at 19:19
  • Advocates of unit tests would argue that writing them doesn't take longer to reach the market. Not writing them does. Similarly for many of the costs on your unit tests side. If writing them actually makes development faster, not slower (which I believe it does) then many of these costs shift up into the 'costs of *not* writing unit tests' category. – Jonathan Hartley Jun 14 '16 at 17:32
  • Unit tests should almost never be refactored. They should be tiny and so easy to write that you will generally just delete a swathe of them and write new ones whenever you make a change to existing code. – Jonathan Hartley Jun 14 '16 at 17:34
6

Kent Beck of JUnit and JUnitMax fame answered a similar question of mine. The question has slightly different semantics, but the answer is definitely relevant.

Community
  • 1
  • 1
Johnno Nolan
  • 29,228
  • 19
  • 111
  • 160
3

The purpose of unit tests is generally to make it possible to refactor or change code with greater assurance that you did not break anything. If a change is scary because you do not know whether you will break anything, you probably need to add a test. If a change is tedious because it will break a lot of tests, you probably have too many tests (or tests that are too fragile).

The most obvious case is the UI. What makes a UI look good is hard to test, and comparing against a golden-master example tends to be fragile. So the layer of the UI involving the look of something tends not to be tested.

The other times it might not be worth it is if the test is very hard to write and the safety it gives is minimal.

For HTML I tended to check that the data I wanted was there (using XPath queries), but did not test the entire HTML. Similarly for XSLT and XML. In JavaScript, when I could I tested libraries but left the main page alone (except that I moved most code into libraries). If the JavaScript is particularly complicated I would test more. For databases I would look into testing stored procedures and possibly views; the rest is more declarative.
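A minimal sketch of that style of check, using JUnit 4 and the standard javax.xml XPath API; the page markup, the id, and the expected value below are invented purely for illustration:

```java
import static org.junit.Assert.assertEquals;

import java.io.ByteArrayInputStream;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;

import org.junit.Test;
import org.w3c.dom.Document;

public class OrderPageTest {

    @Test
    public void totalAppearsInRenderedPage() throws Exception {
        // Stand-in for the output of whatever renders your page;
        // assume it produces well-formed XHTML.
        String xhtml = "<html><body><span id=\"total\">42.00</span></body></html>";

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xhtml.getBytes("UTF-8")));

        // Assert only on the data we care about, not the whole document,
        // so the surrounding markup can change without breaking the test.
        XPath xpath = XPathFactory.newInstance().newXPath();
        assertEquals("42.00", xpath.evaluate("//span[@id='total']", doc));
    }
}
```

The test pins down the data while leaving the presentation free to change, which keeps it from becoming the kind of fragile look-and-feel test described above.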

However, in your case first start with the stuff that worries you the most or is about to change, especially if it is not too difficult to test. Check the book Working Effectively with Legacy Code for more help.

Kathy Van Stone
  • 25,531
  • 3
  • 32
  • 40
  • Why is unit-testing better for this purpose than higher-level testing (i.e. integration and/or system testing)? – ChrisW Jul 07 '09 at 20:15
  • Unit testing is good for checking a lot of paths without as much combinatorial blowup as you get from higher-level testing. Higher-level testing, however, is necessary as well to check that the system works together and with systems such as databases. – Kathy Van Stone Jul 07 '09 at 20:51
  • I suggest that system tests don't vary when the implementation does, and therefore system tests are better for making it possible to refactor with greater assurance. The two chief benefits of unit tests, IMO, are: a) less debugging needed during integration testing, which is useful if and only if debugging during integration testing is relatively difficult or expensive; b) ability to test before integration, e.g. if the other components haven't been written yet. – ChrisW Jul 07 '09 at 21:40
  • 1
    If system tests take too long to write or run, they won't help with refactoring. In practice I rarely had to change unit tests that much with refactoring (outside of changes that the refactoring tools do automatically). I do take care not to tie the test too much to the implementation of a class as opposed to its behavior. – Kathy Van Stone Jul 07 '09 at 22:33
2

Yes, there is such a thing as too much unit testing. One example is unit testing in a whitebox manner, such that you're effectively testing the specific implementation; such testing slows down progress and refactoring, because even correct changes to the code require new unit tests (the old tests were coupled to implementation details rather than behavior).
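A hedged illustration of the difference, built around a hypothetical Cache class invented for this example. The first test reaches into the implementation, so swapping the internal HashMap for, say, an LRU structure breaks it even though the observable behaviour is unchanged; the second asserts only on the public contract, which survives the refactoring:

```java
import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

public class CacheTest {

    // Hypothetical class under test, kept trivial for illustration.
    static class Cache {
        private final Map<String, String> internalMap = new HashMap<String, String>();
        void put(String key, String value) { internalMap.put(key, value); }
        String get(String key) { return internalMap.get(key); }
    }

    // Too white-box: asserts on the internal storage, so replacing the
    // HashMap breaks this test despite correct behaviour.
    @Test
    public void storesValueInInternalMap() {
        Cache cache = new Cache();
        cache.put("k", "v");
        assertEquals("v", cache.internalMap.get("k"));
    }

    // Behaviour-level: asserts on the public contract only.
    @Test
    public void returnsWhatWasPut() {
        Cache cache = new Cache();
        cache.put("k", "v");
        assertEquals("v", cache.get("k"));
    }
}
```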

Paul Sonier
  • 38,903
  • 3
  • 77
  • 117
2

I suggest that in some situations you might want automated testing, but no 'unit' testing at all (Should one test internal implementation, or only test public behaviour?), and that any time spent writing unit tests would be better spent writing system tests.

Community
  • 1
  • 1
ChrisW
  • 54,973
  • 13
  • 116
  • 224
  • Not all automated tests are unit tests. Best to refer to it as "Developer Testing" – cwash Jul 08 '09 at 18:41
  • I don't know; I'm running automated system tests, instead of unit tests ... and running them frequently/routinely, as part of daily development of new features (or, sometimes, of refactoring). The whole system test suite only takes several seconds to run; and I have my own copy of the system (including database, etc.) on which to test. – ChrisW Jul 08 '09 at 18:58
  • @ChrisW I think we're saying the same thing. Sounds like a valid developer test suite to me. I'm arguing that people refer to their automated tests (written and maintained by developers) as developer tests. Just because it runs using a unit testing framework doesn't make it a unit test. I have a blog entry that clarifies this more: http://cwash.org/2009/02/17/dont-unit-test-anymore-no-really/ – cwash Jul 08 '09 at 19:15
  • I sure agree with you that there's more to testing than "unit tests", so it makes more sense to talk about "developer testing" because that's a more inclusive description of developers' testing. FWIW my tests aren't using a unit-testing framework (they're just a series of methods which invoke some input methods and then assert the output data); also, I don't know, maybe my tests are close to what you call "true V&V". – ChrisW Jul 08 '09 at 20:17
  • @ChrisW - Perhaps that is closer to what it is in practice, but I still think if it's maintained by the development team, it should be called a developer test. This is to remove any confusion around what validation and verification and quality control is there to do or who owns it. Other groups or levels of testing typically take place in larger projects, and I'd call the sum of them "true V&V". Thanks for your input... I'm interested to hear how your approach works out for you. – cwash Jul 09 '09 at 14:59
  • It works for me because a) my tests run quickly (some people e.g. http://stackoverflow.com/questions/1094413/is-there-such-a-thing-as-too-much-unit-testing/1094597#1094597 say that unit tests can't be replaced by the system tests because the system tests take too long to run), and b) it's no more expensive for me to fix bugs during integration than it would be to fix bugs when coding and testing 'units' (because I'm the only person involved in the integration). – ChrisW Jul 09 '09 at 15:42
1

While more tests is usually better (I have yet to be on a project that actually had too many tests), there's a point at which the ROI bottoms out, and you should move on. I'm assuming you have finite time to work on this project, by the way. ;)

Adding unit tests has diminishing returns -- after a certain point (Code Complete has some theories), you're better off spending your finite amount of time on something else. That may mean other testing/quality activities like refactoring and code review, usability testing with real human users, etc., or it could be spent on other things like new features, or user experience polish.

ojrac
  • 13,231
  • 6
  • 37
  • 39
  • Good answer. I think that you can take a more fine grained view on ROI for testing a specific feature or piece of code. When you look at it like this, I think you get a better grasp on where the actual cutoff is for diminishing returns w/r/t testing. – cwash Jul 08 '09 at 18:46
1

As EJD said, you can't verify the absence of errors.

This means there are always more tests you could write. Any of these could be useful.

What you need to understand is that unit-testing (and other types of automated testing you use for development purposes) can help with development, but should never be viewed as a replacement for formal QA.

Some tests are much more valuable than others.

There are parts of your code that change a lot more frequently, are more prone to break, etc. These are the most economical places to test.

You need to balance out the amount of testing you agree to take on as a developer. You can easily overburden yourself with unmaintainable tests. IMO, unmaintainable tests are worse than no tests because they:

  1. Turn others off from trying to maintain a test suite or write new tests.
  2. Detract from adding new, meaningful functionality. If automated testing is not a net positive, you should ditch it, as you would any other engineering practice that isn't paying off.

What should I test?

Test the "Happy Path" - this ensures that you get interactions right, and that things are wired together properly. But you don't adequately test a bridge by driving down it on a sunny day with no traffic.

Pragmatic Unit Testing recommends you use Right-BICEP to figure out what to test. "Right" for the happy path, then Boundary conditions, check any Inverse relationships, use another method (if it exists) to Cross-check results, force Error conditions, and finally take into account any Performance considerations that should be verified. I'd say if you are thinking about tests to write in this way, you'll most likely figure out how to get to an adequate level of testing. You'll be able to figure out which ones are more useful and when. See the book for much more info.
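As a rough sketch of what a few Right-BICEP-style cases can look like in JUnit 4 -- the divide() helper is hypothetical, included only to keep the example self-contained:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DivisionTest {

    // Hypothetical method under test.
    static int divide(int dividend, int divisor) {
        if (divisor == 0) {
            throw new IllegalArgumentException("divisor must be non-zero");
        }
        return dividend / divisor;
    }

    // "Right": the happy path.
    @Test
    public void happyPath() {
        assertEquals(5, divide(10, 2));
    }

    // Boundary condition: a zero dividend is legal, not an error.
    @Test
    public void zeroDividendBoundary() {
        assertEquals(0, divide(0, 7));
    }

    // Error condition: force the failure path explicitly.
    @Test(expected = IllegalArgumentException.class)
    public void zeroDivisorIsRejected() {
        divide(1, 0);
    }
}
```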

Test at the right level

As others have mentioned, unit tests are not the only way to write automated tests. Other frameworks may be built on top of unit testing frameworks, but provide mechanisms to do package-level, system, or integration tests. The best bang for the buck may be at a higher level, using unit testing just to verify a single component's happy path.

Don't be discouraged

I'm painting a more grim picture here than I expect most developers will find in reality. The bottom line is that you make a commitment to learn how to write tests and write them well. But don't let fear of the unknown scare you into not writing any tests. Unlike production code, tests can be ditched and rewritten without many adverse effects.

cwash
  • 4,185
  • 5
  • 43
  • 53
0

Unit test any code that you think might change.

Matthew Groves
  • 25,181
  • 9
  • 71
  • 121
  • Alternatively, system-test functionality whose implementation might change: the *functionality* shouldn't change, so test that; but if you're throwing away the *implementation*, why unit-test the implementation? – ChrisW Jul 07 '09 at 20:07
0

You should really only write unit tests for code which you have written yourself. There is no need to test functionality inherently provided to you.

For example, if you've been given a library with an add function, you should not be testing that add(1,2) returns 3. Now if you've WRITTEN that code, then yes, you should be testing it.

Of course, whoever wrote the library may not have tested it and it may not work... in which case you should write it yourself or get a separate one with the same functionality.

Kurisu
  • 812
  • 10
  • 18
0

Well, you certainly shouldn't unit test everything, but you should at least test the complicated tasks or those most likely to contain errors or cases you haven't thought of.

schnaader
  • 49,103
  • 10
  • 104
  • 136
0

The point of unit testing is being able to run a quick set of tests to verify that your code is correct. This lets you verify that your code matches your specification and also lets you make changes and ensure that they don't break anything.

Use your judgement. You don't want to spend all of your time writing unit tests or you won't have any time to write actual code to test.

Sam DeFabbia-Kane
  • 2,599
  • 17
  • 11
0

When you've unit tested your unit tests, thinking you have thereby achieved 200% coverage.

enginoid
  • 83
  • 1
  • 7
  • http://stackoverflow.com/questions/244345/how-do-you-unit-test-a-unit-test/1076159#1076159 – cwash Jul 08 '09 at 18:49
0

There is a development approach called test-driven development which essentially says that there is no such thing as too much (non-redundant) unit testing. That approach, however, is not a testing approach, but rather a design approach which relies on working code and a more or less complete unit test suite with tests which drive every single decision made about the codebase.

In a non-TDD situation, automated tests should exercise every line of code you write (branch coverage in particular is good), but even then there are exceptions - you shouldn't be testing vendor-supplied platform or framework code unless you know for certain that there are bugs in that platform which will affect you. You shouldn't be testing thin wrappers (or, equally, if you need to test it, the wrapper is not thin). You should be testing all core business logic, and it is certainly helpful to have some set of tests that exercise your database at some elemental level, although those tests are too slow to fit the common situation where unit tests are run every time you compile.
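A small sketch of why branch coverage is a stronger target than line coverage; the applyDiscount() method below is hypothetical. A single call executes every line of it, but both tests are needed to exercise both branches of the conditional:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountTest {

    // Hypothetical method under test: both branches share one line, so a
    // single test gives 100% line coverage but only 50% branch coverage.
    static double applyDiscount(double price, boolean member) {
        return member ? price * 0.9 : price;
    }

    @Test
    public void memberBranch() {
        assertEquals(90.0, applyDiscount(100.0, true), 0.001);
    }

    @Test
    public void nonMemberBranch() {
        assertEquals(100.0, applyDiscount(100.0, false), 0.001);
    }
}
```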

Specifically with regard to databases: testing is intrinsically slow and, depending on how much logic is held in your database, quite difficult to get right. Typically things like dbs, HTML/XML documents & templating, and other document-ish aspects of a program are verified more than they are tested. The difference is usually that testing tries to exercise execution paths, whereas verification tries to verify inputs and outputs directly.

To learn more, I would suggest reading up on "code coverage"; there is a lot of material available if you're curious.

Mike Burton
  • 3,010
  • 24
  • 33
  • I'm not sure that TDD explicitly says there is no such thing as too much unit testing. I think more accurately it says that any production code that is introduced should be adequately covered... But not to the extent that you keep writing more and more tests and never get around to implementing any functionality. You use writing a test first to drive out writing more functionality. – cwash Jul 08 '09 at 18:44