
I'm reading Osherove's "The Art of Unit Testing," and though I've not yet seen him say anything about performance testing, two thoughts still cross my mind:

  • Performance tests generally can't be unit tests, because performance tests generally need to run for long periods of time.
  • Performance tests generally can't be unit tests, because performance issues too often manifest at an integration or system level (or at least, the logic a single unit test would need to re-create the conditions of the integration environment would be too involved for it to remain a unit test).

Particularly for the first reason stated above, I doubt it makes sense for performance tests to be handled by a unit testing framework (such as NUnit).

My question is: do my findings/leanings correspond with the thoughts of the community?

Brent Arias

8 Answers


I agree with your findings/learnings. True unit tests only test a portion of the system, ignoring, mocking or faking the rest as necessary. Integration tests (or regression tests) test most or all of the units working together, and that is the true measure of performance.

Kaleb Brasee
  • I've often called the system-wide tests "functional tests." I don't know how that jibes with "the industry," but in my company the vernacular stuck. It was a bit tricky to get into place, though, since we needed some "external" system to be able to drive the application. The application ran on dedicated hardware and inputs came from custom ports that were not feasible to drive directly from the host PC. It was definitely worth it in the end, though. – dash-tom-bang May 20 '10 at 02:39

In some situations you can use unit tests to make sure that an operation finishes within a certain time period. If you want to add more features to an operation without sacrificing its performance, you can use unit tests to assert that. Of course, this kind of unit test is machine dependent, but you can throw some additional variables or configuration into the equation.
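
A minimal NUnit sketch of such a timing assertion; DocumentParser, its workload, and the 200 ms budget are all invented for illustration. NUnit's [MaxTime] attribute offers a shorter way to express the same idea.

using System.Diagnostics;
using NUnit.Framework;

// Hypothetical system under test, standing in for whatever
// operation you want to keep within a time budget.
public class DocumentParser
{
    public int ParseLargeDocument()
    {
        int sum = 0;
        for (int i = 0; i < 1000000; i++) { sum += i % 7; }
        return sum;
    }
}

[TestFixture]
public class ParserPerformanceTests
{
    [Test]
    public void ParseLargeDocument_CompletesWithinBudget()
    {
        var parser = new DocumentParser();
        var stopwatch = Stopwatch.StartNew();

        parser.ParseLargeDocument();

        stopwatch.Stop();

        // The 200 ms budget is machine dependent, as noted above;
        // in practice it could come from configuration.
        Assert.That(stopwatch.ElapsedMilliseconds, Is.LessThan(200),
            "Parsing exceeded the 200 ms budget");
    }
}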

serega

Performance tests might very well be made up of unit tests.

For example, a unit test might throw several different parameters into a method and verify the method returns an expected output. A performance test might execute that unit test 1000 times (or whatever value makes sense for you) while recording everything from CPU and memory counters right down to how long each test took.
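
For illustration, a rough harness along those lines; the Measure helper and its statistics are a sketch, not a full solution (it records only wall-clock time, not the CPU and memory counters mentioned above).

using System;
using System.Diagnostics;
using System.Linq;

public static class PerformanceHarness
{
    // Runs an action repeatedly and reports simple wall-clock statistics.
    // A fuller harness would also sample CPU and memory counters.
    public static void Measure(string name, Action action, int iterations = 1000)
    {
        var timings = new double[iterations];
        for (int i = 0; i < iterations; i++)
        {
            var sw = Stopwatch.StartNew();
            action();
            sw.Stop();
            timings[i] = sw.Elapsed.TotalMilliseconds;
        }

        Console.WriteLine("{0}: min={1:F3} ms, avg={2:F3} ms, max={3:F3} ms",
            name, timings.Min(), timings.Average(), timings.Max());
    }
}

// Example usage, wrapping an existing unit-test body
// (parser.Parse(input) is hypothetical):
//   PerformanceHarness.Measure("Parse", () => parser.Parse(input));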

NotMe

I agree that performance tests cannot be unit tests, but there is no reason we cannot have another set of tests called performance tests. Broadly, tests fall into two categories:

a) Unit tests

b) Integration tests

We run integration tests against the real database (instead of in-memory) to ensure that the SQL scripts and the Hibernate repositories work as expected.

My idea is that we can add another set of tests, called performance tests, as part of the nightly build to test the performance of certain functions. This is important for tracking statistics after a refactoring, or for evaluating whether changes to one part of the application have unintended consequences on another.
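
This answer's stack is Java, but sketched in NUnit terms (to match the question's framework) the idea might look like this; ReportGenerator and the 5-second budget are hypothetical, and the [Category] tag is what lets the nightly build select these tests separately.

using NUnit.Framework;

// Hypothetical class whose performance we want to track nightly.
public class ReportGenerator
{
    public void GenerateMonthlyReport() { /* expensive work elided */ }
}

[TestFixture]
public class ReportGenerationPerformanceTests
{
    // The nightly build can select this category, e.g. with
    //   nunit3-console MyTests.dll --where "cat == Performance"
    // while the regular build excludes it.
    [Test, Category("Performance")]
    [MaxTime(5000)] // fail if the report takes longer than 5 seconds
    public void GenerateMonthlyReport_StaysWithinBudget()
    {
        new ReportGenerator().GenerateMonthlyReport();
    }
}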

I have come across JunitPerf, which might help me to achieve this objective.

vsingh

Unit tests should take almost no time to execute because you are only testing a very specific unit. For example, if your system under test is ClassA : IClassA, you do your mocking/stubbing so that you test only the behaviour of ClassA; if ClassA uses ClassB, you should inject a mock of ClassB instead of the concrete class to achieve this isolation.
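
A minimal sketch of that isolation; the interface members and the hand-rolled stub are invented for illustration (a mocking library such as Moq or Rhino Mocks would serve the same purpose).

using NUnit.Framework;

// ClassA depends on the IClassB abstraction, never on a concrete ClassB.
public interface IClassB
{
    int Compute(int input);
}

public class ClassA
{
    private readonly IClassB _classB;
    public ClassA(IClassB classB) { _classB = classB; }

    public int DoubleOf(int input) { return 2 * _classB.Compute(input); }
}

// Hand-rolled stub standing in for ClassB, with canned behaviour.
public class StubClassB : IClassB
{
    public int Compute(int input) { return input; }
}

[TestFixture]
public class ClassATests
{
    [Test]
    public void DoubleOf_UsesInjectedDependency()
    {
        var sut = new ClassA(new StubClassB());
        Assert.That(sut.DoubleOf(21), Is.EqualTo(42));
    }
}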

In terms of performance tests, it still makes sense to use a testing framework like NUnit / MBUnit / MavenThought; just keep these tests in a separate assembly and don't invoke them as part of your unit tests.

So if you use Rake to invoke your tests, some of your tasks might look like:

Rake Test:All         #Run all unit tests
Rake Test:Acceptance  #Run all acceptance tests
Rake Test:Performance #Run all performance tests
Rake Test:Integration #Run all integration tests

Then, with your continuous integration, Test:All is always invoked after a successful build, whereas Test:Performance is invoked once a day at 12am.

Sean B
  • I like what you are saying, but again, the performance tests I need to run are actually integration tests - such as communication speed between a client and a server. Correct me if I'm wrong, but I don't think a unit testing framework is prepared to deploy the client and server code, launch them separately, and report what the client observed while beating on the server. – Brent Arias May 21 '10 at 22:03
  • You could have the server implementation built and deployed to a specified location (virtual box, or remote server), and then all the performance/integration tests would be executed against that location. There are packages for deploying installations. – Sean B May 21 '10 at 23:07

It all depends on what you call performance testing. When micro-optimizing specific code, I usually use something very similar to unit testing (should I call it unit performance testing?). That's basically what I do in this question (though there I didn't bother to use a unit test framework). But I also do this kind of thing to optimize my C++ production code within the Boost unit testing framework.

Really, there are many kinds of performance testing, at different levels and with different purposes (heavy-load stress testing, profiling, micro-optimization). The performance testing you are speaking of in your question seems to be at the functional-testing level, a level for which you probably won't use a unit testing framework anyway.

kriss

I remember that years ago Microsoft advocated that programmers performance-test their individual ASPs using Visual Studio .NET Application Center Test (ACT). There was (and may still be) a whole methodology for performing Transaction Cost Analysis (TCA) on individual ASPs. That said, these ASPs could be tested using a web driver and possibly mock objects to isolate the code under test (that is, to mimic DB access if it wasn't yet developed).

This approach can be followed with any unit testing, provided you have a driver and, optionally, a mock object framework to take care of any dependencies that are not yet written. This approach has also become popular with SoapUI/LoadUI. In addition, I would recommend isolating individual SQL statements that can be tested (and optimized) against a given database design. This (DB) SQL unit performance testing can be done early in the SDLC, and it will uncover query optimization opportunities.
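
A rough sketch of timing one isolated, parameterized statement; the connection string, table, and query are placeholders, and a real measurement would repeat the query and discard the first (cold-cache) run.

using System;
using System.Data.SqlClient;
using System.Diagnostics;

public static class SqlStatementTiming
{
    public static void TimeQuery()
    {
        // Placeholder connection string and query, for illustration only.
        const string connectionString =
            "Server=localhost;Database=SalesDb;Integrated Security=true;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM Orders WHERE CustomerId = @id", connection))
        {
            command.Parameters.AddWithValue("@id", 42);
            connection.Open();

            var sw = Stopwatch.StartNew();
            object count = command.ExecuteScalar();
            sw.Stop();

            Console.WriteLine("Query returned {0} in {1} ms",
                count, sw.ElapsedMilliseconds);
        }
    }
}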

In terms of cost and value: I have found that early unit performance testing, using mock objects as appropriate, will identify memory leaks, excessive CPU usage, and disk I/O early in the SDLC, but I would 'cherry-pick' the code under test for higher-risk items.

Ian Fleming

There are clear differences between unit tests and performance tests. First and foremost, a unit test checks the application against its functional requirements; for example, you want to ensure that clicking the Home tab navigates to the home page. A performance test, on the other hand, is a type of non-functional test: here you are concerned with the stability and responsiveness of the application under a particular user load for a certain amount of time.

yash