I run numerical simulations all the time. I can tell whether a simulation has worked (i.e., whether it gave acceptable answers), but because I typically run a variable number of them on designated cores in the background (while I work), looking at wall-clock time tells me next to nothing about how quickly they actually ran.
I don't want clock time; I want CPU time. None of the articles I've read on benchmarking seems to mention this aspect. In particular, the usual recommendation to benchmark on a "quiet" machine blurs what is actually being measured.
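To make the distinction concrete, here is a rough sketch of the kind of measurement I have in mind, using the standard `ThreadMXBean`; `simulate()` is just a stand-in for one of my simulations:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuVsWallClock {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        long wallStart = System.nanoTime();
        long cpuStart = threads.getCurrentThreadCpuTime(); // -1 if the JVM doesn't support it

        simulate(); // stand-in for one of my simulations

        long cpuNanos = threads.getCurrentThreadCpuTime() - cpuStart;
        long wallNanos = System.nanoTime() - wallStart;

        // Wall-clock time grows whenever other work preempts this thread;
        // CPU time counts only what this thread actually executed.
        System.out.printf("wall: %.2f s, cpu: %.2f s%n",
                wallNanos / 1e9, cpuNanos / 1e9);
    }

    private static void simulate() {
        double x = 0;
        for (int i = 0; i < 50_000_000; i++) x += Math.sin(i);
        if (x == 42) System.out.println(); // keep the loop from being optimized away
    }
}
```

On a loaded machine the two numbers diverge, and it's the second one I care about.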
I don't need a great deal of detail; I just want to know that simulation A runs about 15% faster or slower than simulation B or C, even though A ran by itself for a while, then I started B, followed by C, and perhaps I played a game for a while before turning in, which would put a higher-priority application on the machine for part of that time. Please don't tell me that ideally I should use a "quiet" machine; my question specifically asks how to benchmark without a machine dedicated to the purpose. I also don't want to cripple the efficiency of my applications while measuring how long they take to run; it seems that significant overhead should only be necessary when a great deal of detail is required. Am I right?
I want to modify my applications so that when I check whether a batch job has succeeded, I can also see how much CPU time it took to reach those results. Can benchmarking give me the answers I'm looking for? Can I simply use Java 9's benchmarking harness, or do I need something else?
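For the batch-job case, what I'm imagining is something along these lines, logging process CPU time next to the success check; `runBatchJob()` is a hypothetical stand-in for my own code, and the cast to `com.sun.management.OperatingSystemMXBean` assumes a HotSpot/OpenJDK-style JVM:

```java
import java.lang.management.ManagementFactory;

public class BatchJobTimer {
    public static void main(String[] args) {
        // com.sun.management extension of the standard MXBean; present on HotSpot/OpenJDK.
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();

        long cpuStart = os.getProcessCpuTime(); // CPU nanoseconds used by this JVM so far

        boolean ok = runBatchJob(); // hypothetical: one of my simulations

        long cpuSeconds = (os.getProcessCpuTime() - cpuStart) / 1_000_000_000L;
        System.out.printf("job %s after %d s of CPU time%n",
                ok ? "succeeded" : "FAILED", cpuSeconds);
    }

    private static boolean runBatchJob() {
        // placeholder for the real simulation
        double x = 0;
        for (int i = 0; i < 100_000_000; i++) x += Math.sqrt(i);
        return x > 0;
    }
}
```

Note that `getProcessCpuTime()` sums CPU time over all threads in the process, which is what I want for a multi-threaded simulation. Is this the right approach, or is a proper harness going to serve me better?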