I am designing some test cases for my application. Assume that I have a method called

ProcessedDataObject createProcessedDataObject(RawDataHolder input)

The performance of createProcessedDataObject is vital, so I would like to include some benchmark performance testing against constant inputs (e.g. building the input from a local text file so it does not vary) and have the test fail if the method ever takes more than a certain number of nanoseconds. In other words, I want the test to raise a red flag in case future changes add complexity that exceeds the time benchmark.
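Something like this minimal sketch is what I have in mind (the class names, the fromFile helper, and the threshold are all just placeholders):

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class CreateProcessedDataObjectBenchmarkTest {

        // Placeholder threshold -- the real limit would be calibrated up front
        private static final long MAX_NANOS = 500000L;

        // Hypothetical owner of the method under test
        private final DataProcessor processor = new DataProcessor();

        @Test
        public void staysUnderTimeBenchmark() throws Exception {
            // Constant input built from a local file so the workload never varies
            RawDataHolder input = RawDataHolder.fromFile("benchmark-input.txt");

            long start = System.nanoTime();
            ProcessedDataObject result = processor.createProcessedDataObject(input);
            long elapsed = System.nanoTime() - start;

            assertTrue("Took " + elapsed + " ns, limit is " + MAX_NANOS + " ns",
                    elapsed <= MAX_NANOS);
        }
    }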

My question is: does benchmark performance testing belong in the realm of JUnit or should I keep it outside?

amphibient
  • I'd say that's a matter of opinion. The more pressing concern is: regardless of the framework, can you set up an environment that will allow for reliable and meaningful automated performance regression tests? – Oliver Charlesworth Apr 01 '13 at 23:35
  • So, would you say that if securing such a stable sandbox environment were too much of a moving target, that would invalidate this kind of test as being a unit test? – amphibient Apr 01 '13 at 23:40
  • I wouldn't use JUnit for this task – it's hard to get the benchmark to a stable starting point, especially if your test would be running as part of a suite (of other tests). – Perception Apr 01 '13 at 23:42
  • Well, it's not so much the "unit" that I'm thinking about. It's the scenario where you have your performance test as part of a continuous-integration suite. If you don't have a stable environment (e.g. you have multiple tests or other stuff running in parallel, or the test gets farmed about to an arbitrary slave node, or the OS is doing something with the disks at the time the test runs), you may see random passes/failures which don't actually correlate with anything real. – Oliver Charlesworth Apr 01 '13 at 23:43

2 Answers

If you are just looking for a ballpark estimate, then JUnit is fine. However, you don't want to include benchmarks in the suite of tests you use when doing a build, since that will slow your build down. If you are looking to publish your benchmark, then in my opinion, JUnit should not be used.

Rob Breidecker

I'd suggest you split your test cases into functional unit tests and non-functional integration tests. If you're using Maven, you can use Surefire + Failsafe to achieve this: Surefire runs your unit tests during the test phase, while Failsafe runs your integration tests later, in the integration-test phase. You can still write your performance tests as JUnit tests, but this way functional and performance testing stay logically separated.
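Out of the box, Surefire picks up test classes named like *Test while Failsafe picks up classes named like *IT, so the split can be as simple as a naming convention. A minimal illustration (the class and method names are hypothetical):

    import org.junit.Test;

    // The trailing "IT" means Failsafe's default *IT pattern picks this class
    // up during the integration-test phase, so it never slows the unit-test run.
    public class DataProcessorPerfIT {

        @Test
        public void meetsPerformanceBenchmark() {
            // timing and threshold assertions go here (see the sketch further down)
        }
    }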

Without knowing too much about your implementation, I'd hypothesize that your performance tests are actually integration tests, anyway...

And as the comments suggest, you'll need to be careful about your pass/fail criteria and your threshold values.

e.g. if my test limit is 100 nanos and a run takes 101 nanos, is that bad enough to fail the build? Should I run the same test 10 times and take an average? Should I run the same test 10 times and require every run to come in below the threshold? How many test runs should I do? All of these (and more) are decisions you'll need to make; a rough sketch of the every-run-must-pass variant follows.
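Here's what that strict variant might look like (the run counts, the threshold, and the DataProcessor/fromFile names are placeholders, not part of your code):

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class ProcessedDataObjectPerfIT {

        private static final int WARMUP_RUNS = 5;   // let the JIT settle before measuring
        private static final int MEASURED_RUNS = 10;
        private static final long MAX_NANOS = 100L; // placeholder threshold

        // Hypothetical owner of the method under test
        private final DataProcessor processor = new DataProcessor();

        @Test
        public void everyRunStaysUnderThreshold() throws Exception {
            RawDataHolder input = RawDataHolder.fromFile("benchmark-input.txt");

            // Unmeasured warm-up runs so JIT compilation doesn't skew the numbers
            for (int i = 0; i < WARMUP_RUNS; i++) {
                processor.createProcessedDataObject(input);
            }

            // Strict criterion: every measured run must beat the threshold
            for (int i = 0; i < MEASURED_RUNS; i++) {
                long start = System.nanoTime();
                processor.createProcessedDataObject(input);
                long elapsed = System.nanoTime() - start;
                assertTrue("Run " + i + " took " + elapsed + " ns (limit " + MAX_NANOS + " ns)",
                        elapsed <= MAX_NANOS);
            }
        }
    }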

Catchwa
  • How do I split them into different categories? Please see my newer post; it deals with that exactly: http://stackoverflow.com/questions/15755389/running-benchmark-methods-in-a-junit-test-class – amphibient Apr 02 '13 at 04:38