10

To the question "Am I unit testing or integration testing?" I answered, a bit provocatively: do your tests and let other people spend time on taxonomy.

For me the distinction between the various levels of testing is technically pointless: often the same tools are used, the same skills are needed, and the same objective is pursued: removing software faults. At the same time, I can understand that the traditional workflows most developers use need this distinction. I just don't feel at ease with traditional workflows.

So my question aims at better understanding what appears to be a controversy to me, and at gathering various points of view about whether or not this separation between the levels of testing is relevant.

Is my opinion wrong? Do other workflows exist which don't emphasize this separation (maybe agile methods)? What is your experience on the subject?

To be clear: I am perfectly aware of the definitions (for those who aren't, see this question). I don't think I need a lesson about software testing, but feel free to provide some background if your answer requires it.

E_net4
mouviciel

4 Answers

16

Performance is typically the reason I segregate "unit" tests from "functional" tests.

Groups of unit tests ought to execute as fast as possible and be runnable after every compilation.

Groups of functional tests might take a few minutes to execute and get executed prior to checkin, maybe every day or every other day depending on the feature being implemented.

If all of the tests were grouped together, I'd never run any tests until just before checkin, which would slow down my overall pace of development.
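
One common way to keep those groups separate in practice is to tag the slow tests and filter them out of the everyday run. Here is a minimal sketch using pytest, assuming a Python code base; the `slow` marker name and the functions are hypothetical:

```python
# test_discount.py -- hypothetical example of splitting fast and slow tests.
# Fast loop:        pytest -m "not slow"
# Before check-in:  pytest          (runs everything, including slow tests)
import pytest


def apply_discount(price, percent):
    """Tiny stand-in for the real business logic under test."""
    return round(price * (1 - percent / 100.0), 2)


def test_apply_discount_happy_path():
    # Unit test: pure computation, runs in milliseconds after every compile/save.
    assert apply_discount(10.00, 15) == 8.50


@pytest.mark.slow
def test_checkout_flow_against_real_services():
    # Functional test: would talk to real services, so it is tagged "slow"
    # and only run before check-in or in the nightly build.
    pytest.skip("placeholder for a slow end-to-end scenario")
```

With the `slow` marker registered in pytest.ini, `pytest -m "not slow"` gives the fast after-every-compile loop, while a plain `pytest` run before check-in also exercises the slow group.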

Alex B
  • If I understand correctly, you build your test groups according to how fast they are and how many times you run them. I think this is a good point, thank you. – mouviciel Feb 05 '09 at 21:26
  • 1
    +1 The longer the tests run, the less likely a developer is going to run the test suite. – Ian Hunter Dec 21 '12 at 23:17
  • Interesting. So maybe we don't want to differentiate between `unit/integration/functional`, but rather between `fast` and `slow` regardless of the perceived abstraction level of the test. It could be a more powerful paradigm as some simple (cli) apps might have no slow tests whatsoever and differentiation between low-level (unit) testing and high-level (integration/functional) testing in them would be indeed a futile taxonomy exercise. – Petr Skocik May 25 '16 at 07:35
  • Keeping abstraction layers separate for testing purpose also provides the benefit of helping you quickly isolate what went wrong when you changed a component. If the unit tests continue to pass but the functional tests fail, you can more quickly check to see whether a) you missed a unit test case, and possibly need to change your business logic, or b) there is a bug in your glue code. – cosmicFluke Jun 06 '18 at 20:37
  • I think, perhaps more importantly than ^, enforcing testing abstraction separation can encourage developers with a variety of backgrounds and preferences to conform to design practices that keep abstraction layers separate in implementation. You can't really unit test your implementation if you've cobbled together business logic, API logic, and data access logic into the same function. – cosmicFluke Jun 06 '18 at 20:39
9

I'd have to agree with @Alex B: you need to differentiate between unit tests and integration tests when writing your tests, so that your unit tests run as fast as possible and have no more dependencies than required to exercise the code under test. You want unit tests to be run very frequently, and the more "integration"-like they are, the less often they will be run.

In order to make this easier, unit tests usually (or ought to) involve mocking or faking external dependencies. Integration tests intentionally leave these dependencies in, because that is the point of an integration test. Do you need to mock/fake every external dependency? I'd say not necessarily: not if the cost of mocking/faking is high and the value returned is low, that is, if using the real dependency does not add significantly to the time or complexity of the test(s).

Overall, though, I'd say it's best to be pragmatic rather than dogmatic about it: recognize the differences and avoid intermixing them if your integration tests make it too expensive to run your tests frequently.
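
To make the mock/fake point concrete, here is a minimal sketch using Python's standard `unittest.mock`; the `WeatherClient` class and `report()` function are made-up stand-ins for an external dependency and the code under test:

```python
# test_report.py -- unit test that fakes the external dependency;
# an integration test would use the real client instead.
from unittest.mock import Mock


class WeatherClient:
    """Stand-in for an external dependency (e.g. an HTTP API client)."""

    def current_temp(self, city):
        raise NotImplementedError("talks to the network in real life")


def report(client, city):
    """Code under test: formats whatever the client returns."""
    return f"{city}: {client.current_temp(city)} C"


def test_report_formats_temperature():
    # Unit test: the network call is replaced by a Mock, so the test is fast
    # and fails only if our own formatting logic is wrong.
    fake_client = Mock(spec=WeatherClient)
    fake_client.current_temp.return_value = 21
    assert report(fake_client, "Oslo") == "Oslo: 21 C"
    fake_client.current_temp.assert_called_once_with("Oslo")
```

The integration-level counterpart would simply pass a real `WeatherClient`, accepting the extra runtime and flakiness in exchange for exercising the real wiring.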

tvanfosson
  • So a pragmatic separation would be between frequent fast tests and infrequent slow or complex tests? I've already encountered unit tests which needed heavy hardware to be run (a switch-off sequence observed with a logic analyzer plugged into the CPU bus, for instance). – mouviciel Feb 05 '09 at 21:20
  • 1
    @mouviciel In general, yes. Unit tests *usually* fall right into the "fast test" category whereas integration tests would *usually* take longer since they test much more code. As you've found out, though, not all programs are created equal and some pieces of code that would normally be considered a unit can take a long time to run. I would suspect, however, that for *most* programs written out there that the fast test - unit / slow test - integration differentiation works. Point being that you should have a set of tests that run quickly to show you the stability of changes just made. – Taylor Price Oct 27 '11 at 17:58
1

Definitions from my world:

Unit test - test the obvious paths of the code and check that it delivers the expected results.

Function test - thoroughly examine the definitions of the software and test every path defined, through all allowable ranges. A good time to write regression tests.

System test - test the software in its system environment, relative to itself. Spawn all the processes you can, explore every internal combination, run it a million times overnight, see what falls out.

Integration test - run it on a typical system setup and see if other software causes a conflict with the tested one.
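
As a rough illustration of the difference in scope between the first two levels, here is a sketch with a made-up `clamp` function: the unit test covers the obvious paths, while the function test sweeps the whole allowable range as a parametrised regression suite.

```python
# test_clamp.py -- hypothetical example contrasting unit and function test scope
import pytest


def clamp(value, low=0, high=100):
    """Toy code under test: restrict value to the [low, high] range."""
    return max(low, min(high, value))


def test_clamp_obvious_paths():
    # Unit-test scope: the obvious paths and the expected results.
    assert clamp(50) == 50
    assert clamp(-5) == 0
    assert clamp(120) == 100


@pytest.mark.parametrize("value", range(-10, 111, 10))
def test_clamp_stays_within_bounds(value):
    # Function-test scope: exercise the whole allowable range (and its borders);
    # parametrised cases like these double as regression tests.
    assert 0 <= clamp(value) <= 100
```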

tkotitan
0

Of course your opinion is wrong, at least regarding complex products.

The main point of automated testing is not to find bugs, but to point out the function or module where the problem is.

If engineers constantly have to spend brain resources troubleshooting test failures, then something is wrong. Of course failures in integration testing may be tricky to deal with, but that shouldn't happen often if all modules have good unit test coverage.

And if you get an integration test failure, in an ideal world it should be quick to add the corresponding (missing) unit tests for the modules (or parts of the system) involved, which will confirm where exactly the problem is.

But here comes the atomic bomb: not all systems can be properly covered with unit tests. If the architecture suffers from excessive coupling or complex dependencies, it is almost impossible to properly cover the functionality with unit tests, and integration testing is indeed the only way to go (besides deep refactoring). In such systems there is indeed no big difference between unit and integration tests.
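
As a hypothetical illustration of that last point: when the code reaches straight into its collaborators, only an integration test can exercise it; once the dependency is injected, a unit test becomes possible again (all names below are made up).

```python
# reports.py -- sketch of why excessive coupling forces integration testing
import sqlite3


class HardWiredReport:
    def total(self):
        # Reaches straight into a real database: the only way to test this
        # method is an integration test with that database in place.
        conn = sqlite3.connect("/var/data/orders.db")
        return sum(row[0] for row in conn.execute("SELECT amount FROM orders"))


class InjectedReport:
    def __init__(self, fetch_amounts):
        # The dependency is injected, so a test can pass in a plain function.
        self._fetch_amounts = fetch_amounts

    def total(self):
        return sum(self._fetch_amounts())


def test_injected_report_totals_amounts():
    # Unit test: no database required, and a failure points straight at the logic.
    report = InjectedReport(lambda: [10, 20, 12])
    assert report.total() == 42
```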

noonex