I don't know of any separate tools for this, but JUnit has an optional timeout parameter in the @Test annotation:
The second optional parameter, timeout, causes a test to fail if it
takes longer than a specified amount of clock time (measured in
milliseconds). The following test fails:
@Test(timeout = 100)
public void infinity() {
    while (true);
}
So you could write additional unit tests to check that certain parts work "fast enough". Of course, you'd first need to decide on the maximum amount of time a particular task should be allowed to take.
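For example, a minimal sketch of such a test might look like the following; the 500 ms limit and the generateMonthlyReport placeholder are made up here, so substitute your own code and whatever limit you decide on:

import org.junit.Test;

public class PerformanceTest {

    // Made-up limit: the task is expected to finish well under 500 ms,
    // so the timeout leaves some head room for variation between runs.
    @Test(timeout = 500)
    public void monthlyReportIsGeneratedFastEnough() {
        generateMonthlyReport(); // stand-in for the code whose speed you care about
    }

    // Placeholder for the real work; replace with a call into your production code.
    private void generateMonthlyReport() {
    }
}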
-
If the second question is relevant, then here are the issues that I see:
- Variability depending on the environment it is run on.
There will always be some variability, but to minimize it, I'd use Hudson or a similar automated build and test server to run the tests, so the environment would be the same each time (of course, if the server running Hudson also performs all sorts of other tasks, those tasks could still affect the results). You'd need to take this into account when deciding the maximum running time for tests: leave some "head room", so that if a test takes, say, 5% longer to run than usual, it still won't fail straight away.
- How to detect changes, since microbenchmarks in Java have a large variance.
Microbenchmarks in Java are rarely reliable, so I'd test larger chunks with integration tests (such as handling a single HTTP request or whatever you have) and measure the total time. If the test fails because it takes too long, isolate the problematic code by profiling, or measure and log the running time of the separate parts of the test during the test run to see which part takes the most time.
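As a rough sketch of that last approach (the parseRequest and renderResponse placeholders and the 1000 ms limit are made up, not from any particular library), you could time the parts with System.nanoTime and log them:

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class RequestHandlingPerformanceTest {

    @Test
    public void handlesRequestFastEnough() {
        long start = System.nanoTime();
        parseRequest();                   // stand-in for the first part of the work
        long afterParsing = System.nanoTime();
        renderResponse();                 // stand-in for the second part of the work
        long end = System.nanoTime();

        // Log the time of each part so the slowest one can be identified from the test output.
        System.out.printf("parsing: %d ms, rendering: %d ms%n",
                (afterParsing - start) / 1_000_000,
                (end - afterParsing) / 1_000_000);

        // Made-up total limit of 1000 ms, chosen with some head room over the usual running time.
        assertTrue("Request handling took too long", (end - start) / 1_000_000 < 1000);
    }

    // Placeholders for the real work; replace with calls into your production code.
    private void parseRequest() {
    }

    private void renderResponse() {
    }
}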
- If Caliper collects the results, how to get the results out of Caliper so that they can be saved in a custom format. Caliper's documentation is lacking.
Unfortunately, I don't know anything about Caliper.