
I was trying to measure the performance of my algorithm by first implementing it in Java and then measuring the processor time for different inputs. I used ThreadMXBean.getCurrentThreadCpuTime() to get the CPU time consumed by the current thread.

I profiled it first using the following code:

// Uses java.lang.management.ManagementFactory and java.lang.management.ThreadMXBean.
void calculateProfile() {
    ThreadMXBean bean = ManagementFactory.getThreadMXBean();
    for (int n = 10; n <= 200; n++) {
        long time_beg = bean.getCurrentThreadCpuTime(); // CPU time of the current thread, in nanoseconds
        run_my_algorithm(n);
        long time_end = bean.getCurrentThreadCpuTime();
        System.out.println("Output = " + n + " " + (time_end - time_beg));
    }
}

However, I was surprised to see that the measured time does not grow with n as expected. In some cases the execution time actually decreased, even though my algorithm has a time complexity of O(n).

To look into the problem, I changed the code to run my algorithm only once per execution, and manually recorded the time for each run. In this case, I do see a linear increase in execution time with respect to n. So my question is: how are the two cases different? What is causing this difference in the timing patterns? And how can I best automate this profiling process in that case?

  • You're not profiling the code but measuring the time it takes at runtime, and you're doing it wrong. Refer to [How do I write a correct micro-benchmark in Java?](http://stackoverflow.com/q/504103/1065197) to understand why. Use a framework like caliper or JMH. – Luiggi Mendoza Jun 24 '15 at 15:52
  • @LuiggiMendoza I am going through the link -- but I have one related question. What is wrong with ThreadMXBean? – Arani Jun 24 '15 at 15:58
  • Basically, nothing. The problem is that you're forgetting that the JVM will optimize the code via JIT compilation, so you should first have a warm-up phase for the piece of code being measured, besides the other factors already mentioned in the link I sent you. – Luiggi Mendoza Jun 24 '15 at 16:09
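
A minimal sketch of the kind of warm-up-then-measure loop the comment above describes might look like the following. It still uses ThreadMXBean.getCurrentThreadCpuTime(), but warms the code up before timing and averages over several repetitions; the warm-up and repetition counts, and the placeholder for run_my_algorithm(int), are illustrative assumptions, not a definitive benchmarking setup.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class Profile {

    // Placeholder for the algorithm from the question (assumed signature).
    static void run_my_algorithm(int n) { /* ... */ }

    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();

        for (int n = 10; n <= 200; n++) {
            // Warm-up: let the JIT compile and optimize the hot code path
            // before any timing is recorded (iteration count is arbitrary).
            for (int i = 0; i < 10_000; i++) {
                run_my_algorithm(n);
            }

            // Measure: average the CPU time over several repetitions to reduce noise.
            int reps = 100;
            long start = bean.getCurrentThreadCpuTime(); // nanoseconds
            for (int i = 0; i < reps; i++) {
                run_my_algorithm(n);
            }
            long end = bean.getCurrentThreadCpuTime();

            System.out.println("Output = " + n + " " + (end - start) / reps);
        }
    }
}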
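
The first comment also recommends a benchmarking framework such as JMH, which handles warm-up, forking, and repeated measurement for you. A minimal sketch of what a JMH benchmark for this case could look like is below; the class name, the parameter values, and the placeholder for run_my_algorithm(int) are assumptions, and in practice any result the algorithm produces should be returned from the benchmark method or sunk into a Blackhole so the JIT cannot eliminate the work.

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5)
@Measurement(iterations = 10)
@Fork(1)
public class MyAlgorithmBenchmark {

    // Input sizes to sweep over; JMH runs the benchmark once per value.
    @Param({"10", "50", "100", "200"})
    int n;

    @Benchmark
    public void measure() {
        run_my_algorithm(n); // the algorithm from the question (assumed accessible here)
    }

    // Placeholder so the sketch compiles on its own.
    static void run_my_algorithm(int n) { /* ... */ }
}

Such a benchmark is typically run via the JMH-generated uber-jar (for example, java -jar target/benchmarks.jar) or programmatically through org.openjdk.jmh.runner.Runner.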

0 Answers