I'm executing the following code in Java to print the elapsed CPU time of the current thread, in both nanoseconds and milliseconds:
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

ThreadMXBean bean = ManagementFactory.getThreadMXBean();
long t = bean.getCurrentThreadCpuTime();
int c = 0;
// Print the first 100 readings where the reported CPU time has changed
while (c < 100) {
    long u = bean.getCurrentThreadCpuTime();
    if (t != u) {
        System.out.println(u + " " + (u / 1000000.0));
        t = u;
        c++;
    }
}
I'm running it on both macOS and Windows:
- Mac: macOS Sierra 10.12.3, Oracle JDK 1.7.0_79
- Windows: Windows 10, Oracle JDK 1.7.0_71
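(For completeness: CPU-time measurement appears to work on both machines, but a guard like the following can rule out unsupported or disabled measurement. This is just a minimal sketch reusing the same bean as above; per the ThreadMXBean Javadoc, getCurrentThreadCpuTime() returns -1 when measurement is disabled.)

// Optional sanity check, using the same bean as in the loop above.
if (!bean.isCurrentThreadCpuTimeSupported()) {
    throw new IllegalStateException("CPU time measurement not supported on this JVM");
}
if (!bean.isThreadCpuTimeEnabled()) {
    bean.setThreadCpuTimeEnabled(true); // may throw UnsupportedOperationException
}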
On macOS, I'm getting the following output (first column is nanoseconds, second column milliseconds):
152955000 152.955
153749000 153.749
156080000 156.08
156161000 156.161
156205000 156.205
156246000 156.246
156301000 156.301
156364000 156.364
156429000 156.429
156471000 156.471
156513000 156.513
156552000 156.552
156603000 156.603
156645000 156.645
156691000 156.691
156731000 156.731
156787000 156.787
...
while on Windows, the output is:
109375000 109.375
125000000 125.0
140625000 140.625
156250000 156.25
171875000 171.875
187500000 187.5
203125000 203.125
218750000 218.75
234375000 234.375
250000000 250.0
265625000 265.625
281250000 281.25
296875000 296.875
312500000 312.5
328125000 328.125
343750000 343.75
359375000 359.375
...
You can see how the macOS output is essentially smooth and continuous, while on Windows successive readings advance in discrete steps of exactly 15.625 ms (1/64 of a second).
I've observed the same behavior across several versions of Java, Windows, and macOS.
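To see the step size directly, the deltas between successive distinct readings can be printed instead of the raw values; here's a minimal self-contained variant of the loop above (judging from the output shown, on the Windows machine every delta comes out as 15.625 ms):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuClockGranularity {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long prev = bean.getCurrentThreadCpuTime();
        int seen = 0;
        // Busy-loop until the reported CPU time changes, then print the jump
        while (seen < 20) {
            long now = bean.getCurrentThreadCpuTime();
            if (now != prev) {
                System.out.println("delta: " + (now - prev) / 1000000.0 + " ms");
                prev = now;
                seen++;
            }
        }
    }
}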
Does anybody have an idea of why this is happening? Is there a better way to measure CPU time in milliseconds on Windows? (System.nanoTime() is not what I want, since it measures elapsed wall-clock time rather than CPU time.)
Thanks,
Diego.