Even more important update: I tried the same code on Java 8, and System.currentTimeMillis() works as expected:
1449242048.588 368936.080131
1449242048.590 368936.082132
1449242048.592 368936.084132
1449242048.594 368936.086132
1449242048.596 368936.088133
1449242048.598 368936.090133
1449242048.600 368936.092133
1449242048.602 368936.094133
1449242048.604 368936.096133
1449242048.606 368936.098134
1449242048.608 368936.100134
1449242048.610 368936.102134
Main run took 20.036 seconds. Expected time: 20.0
Thanks to @Tunaki for the hint.
Update: I think it's useful to quote here the article linked from the answer to the duplicate question:
System.currentTimeMillis() is implemented using the GetSystemTimeAsFileTime method, which essentially just reads the low resolution time-of-day value that Windows maintains. Reading this global variable is naturally very quick - around 6 cycles according to reported information. This time-of-day value is updated at a constant rate regardless of how the timer interrupt has been programmed - depending on the platform this will either be 10ms or 15ms (this value seems tied to the default interrupt period).
(emphasis mine)
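To see that update period directly, one can spin on System.currentTimeMillis() and time how far apart its value changes are. This is a minimal sketch (not part of the original test program, and the class and method names are my own); on a Windows box with the default interrupt period it should report roughly 15-16 ms:

```java
public class MillisGranularity {

    /**
     * Spin until System.currentTimeMillis() has ticked `samples` times
     * and return the average interval between updates, in milliseconds.
     */
    static double measureTickMillis(int samples) {
        long last = System.currentTimeMillis();
        // Busy-wait for the first transition so we start on a tick boundary.
        while (System.currentTimeMillis() == last) { /* spin */ }
        long first = System.currentTimeMillis();
        long current = first;
        for (int i = 0; i < samples; i++) {
            // Busy-wait for the next observable change of the clock value.
            while (System.currentTimeMillis() == current) { /* spin */ }
            current = System.currentTimeMillis();
        }
        return (current - first) / (double) samples;
    }

    public static void main(String[] args) {
        System.out.printf("Average tick: %.1f ms%n", measureTickMillis(20));
    }
}
```

On most Linux systems this prints an average tick close to 1 ms, which is why the effect tends to surprise people who first hit it on Windows.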
I have some data recorded with timestamps derived from System.nanoTime(). When I play it back, I use a thread that spins in a tight loop calling System.nanoTime() until enough nanoseconds have elapsed to reproduce the difference between the timestamps.

I've noticed that mixing System.currentTimeMillis() with this kind of playback produces very strange results, at least on Windows 7: the nano time advances as expected, but the milli time jumps 16 milliseconds at a time. The overall waiting time (measured in millis) is still as expected. Here is the tail end of the log from the small program I wrote to illustrate this:
1449238154.610 365042.452692
1449238154.626 365042.454692
1449238154.626 365042.456692
1449238154.626 365042.458692
1449238154.626 365042.460692
1449238154.626 365042.462693
1449238154.626 365042.464693
1449238154.626 365042.466693
1449238154.626 365042.468693
1449238154.642 365042.470693
1449238154.642 365042.472694
1449238154.642 365042.474694
1449238154.642 365042.476694
1449238154.642 365042.478694
1449238154.642 365042.480694
Main run took 20.0 seconds. Expected time: 20.0
The left column is millis time, the right column is nanos time, both normalized to seconds. As you can see, the millis time is not updated every millisecond; instead it updates roughly once every 16 milliseconds.
Can someone explain what's going on?
I am attaching the code I wrote for testing this effect. The environment is Windows 7 Enterprise, Java 7, with 8 cores on 2 CPUs.
public class NanotimeTest {
    private static final long DELAY_NANOS = 2_000_000;
    private static final int ITERATIONS = 10_000;

    private static class Times {
        public long millis;
        public long nanos;
    }

    Times[] times = new Times[ITERATIONS];

    public static void main(String[] args) {
        new NanotimeTest().run();
    }

    private void run() {
        for (int i = 0; i < ITERATIONS; i++) {
            times[i] = new Times();
        }
        System.out.println("Starting warmup");
        produceTimes(1000);
        System.out.println("Starting main run");
        long start = System.currentTimeMillis();
        produceTimes(DELAY_NANOS);
        long end = System.currentTimeMillis();
        for (Times t : times) {
            System.out.printf("%.3f %f\n", t.millis / 1000.0, t.nanos / 1_000_000_000.0);
        }
        System.out.println("Main run took " + (end - start) / 1000.0
                + " seconds. Expected time: " + (DELAY_NANOS * ITERATIONS) / 1_000_000_000.0);
    }

    private void produceTimes(long delayNanos) {
        for (int i = 0; i < ITERATIONS; i++) {
            long start;
            // Sample both clocks, then busy-wait until delayNanos have elapsed.
            times[i].millis = System.currentTimeMillis();
            times[i].nanos = start = System.nanoTime();
            while (System.nanoTime() - start < delayNanos) {
                // spin
            }
        }
    }
}
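For the playback use case above, one way to sidestep the coarse tick is to sample System.currentTimeMillis() once and derive all later wall-clock values from System.nanoTime() deltas. The sketch below is my own workaround, not part of the original test; note that it drifts with the monotonic clock and never re-syncs to the system time:

```java
public class HybridClock {
    private final long baseMillis = System.currentTimeMillis();
    private final long baseNanos = System.nanoTime();

    /**
     * Wall-clock time in milliseconds, advanced with nanoTime() resolution
     * rather than the 10-16 ms tick of currentTimeMillis().
     */
    public double millis() {
        return baseMillis + (System.nanoTime() - baseNanos) / 1_000_000.0;
    }

    public static void main(String[] args) {
        HybridClock clock = new HybridClock();
        // Successive reads now differ by sub-millisecond amounts.
        System.out.printf("%.3f%n", clock.millis() / 1000.0);
        System.out.printf("%.3f%n", clock.millis() / 1000.0);
    }
}
```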