
I've built a timing loop in Java. Simple. I'm avoiding Thread.sleep() because thread scheduling overhead makes high-resolution delays impossible, so instead I used the following highly inefficient busy-wait loop and got better results:

public static void timerBlockingDelayTest()
{
    long DELAY_TARGET = 5;  // desired delay in milliseconds
    long t0, t; 

    // Busy-wait until DELAY_TARGET ms have elapsed
    t0 = System.currentTimeMillis(); 
    while (System.currentTimeMillis() < t0+DELAY_TARGET) {}
    t = System.currentTimeMillis(); 

    // Report how far the actual delay was from the target
    long offTargetAmt = Math.abs(t-t0-DELAY_TARGET); 
    System.out.format("Timer loop was off target by %d milliseconds\n",
            offTargetAmt);
}

Things of which I am aware: the operating system is not real-time, thread scheduling is at the whim of the OS, and garbage collection can cause delays.

What have I not considered?

On my machine (Windows 7 x64, i5, 2.4 GHz) the best resolution I can get is about 15 ms. In fact, if I make DELAY_TARGET a multiple of 15, things work GREAT. However, if the target time is not near a multiple of 15, offTargetAmt above will regularly be ~8 ms.
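
To see that granularity directly, you can spin until currentTimeMillis() changes value and print the size of the step (a rough sketch, separate from the test above; the method name is just for illustration):

public static void measureMillisGranularity()
{
    // Wait for currentTimeMillis() to tick over so we start on a boundary
    long start = System.currentTimeMillis();
    while (System.currentTimeMillis() == start) {}

    // Time one full tick of the clock
    long tickStart = System.currentTimeMillis();
    while (System.currentTimeMillis() == tickStart) {}
    long tickEnd = System.currentTimeMillis();

    System.out.format("currentTimeMillis() advanced in a step of %d ms\n",
            tickEnd - tickStart);
}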

I'm also aware of this post: high resolution timer in java

What the heck?! Is plus or minus ~8 ms seriously the best I can do??! I'm just looking for a "yes that's right" or "no you didn't consider ___" answer. Thanks

UPDATE:

Using System.nanoTime() seems to make a huge difference. I didn't believe it at first, but here is my updated code that compares the two methods. See for yourself.

public static void timerBlockingDelayTest()
{
    long DELAY_TARGET_MS = 5; 
    long NS_PER_MS = 1000000; 
    long DELAY_TARGET_NS = DELAY_TARGET_MS * NS_PER_MS; 
    long t0, t; 

    // Using System.currentTimeMillis() 
    t0 = System.currentTimeMillis(); 
    while (System.currentTimeMillis() < t0+DELAY_TARGET_MS) {}
    t = System.currentTimeMillis(); 
    long msOffTarget = Math.abs(t-t0-DELAY_TARGET_MS); 

    // Using System.nanoTime()
    t0 = System.nanoTime(); 
    while (System.nanoTime() < t0+DELAY_TARGET_NS) {}
    t = System.nanoTime(); 
    long nsOffTarget = Math.abs(t-t0-DELAY_TARGET_NS); 

    // Compare the two methods
    System.out.format("System.currentTimeMillis() method: "); 
    System.out.format(" - Off by %d ms (%d ns) \n", 
            msOffTarget, msOffTarget*NS_PER_MS); 
    System.out.format("System.nanoTime() method:          "); 
    System.out.format(" - Off by %d ms (%d ns)\n", 
            nsOffTarget/NS_PER_MS, nsOffTarget); 
}

Here is a sample output:

debug:
System.currentTimeMillis() method:  - Off by 11 ms (11000000 ns) 
System.nanoTime() method:           - Off by 0 ms (109 ns)
BUILD SUCCESSFUL (total time: 0 seconds)

UPDATE 2 (hopefully the last):

Duh. Measuring a quantized or imperfect clock against itself is a little dumb. What I mean is that in the first loop I was using currentTimeMillis() both to produce the delay and to measure its error, which isn't the most intelligent thing I've ever done. After realizing this, I PROFILED both of the above methods and found that nanoTime() does indeed yield better resolution.

If you don't have a profiler, use nanoTime() to measure the duration of the currentTimeMillis() loop, like this:

public static void timerBlockingDelayTest()
{
    long DELAY_TARGET_MS = 5; 
    long NS_PER_MS = 1000000; 
    long DELAY_TARGET_NS = DELAY_TARGET_MS * NS_PER_MS; 
    long t0ms, t0, t; 

    // Using System.currentTimeMillis() 
    t0 = System.nanoTime(); 
    t0ms = System.currentTimeMillis(); 
    while (System.currentTimeMillis() < t0ms+DELAY_TARGET_MS) {}
    t = System.nanoTime(); 
    long nsOffTarget1 = Math.abs(t-t0-DELAY_TARGET_NS); 

    // Using System.nanoTime()
    t0 = System.nanoTime(); 
    while (System.nanoTime() < t0+DELAY_TARGET_NS) {}
    t = System.nanoTime(); 
    long nsOffTarget2 = Math.abs(t-t0-DELAY_TARGET_NS); 

    // Compare the two methods
    System.out.format("System.currentTimeMillis() method: "); 
    System.out.format(" - Off by %d ms (%d ns)\n", 
            nsOffTarget1/NS_PER_MS, nsOffTarget1); 
    System.out.format("System.nanoTime() method:          "); 
    System.out.format(" - Off by %d ms (%d ns)\n", 
            nsOffTarget2/NS_PER_MS, nsOffTarget2); 
}

At least that way I've measured both delays against the same reference, which is only slightly more intelligent. The above gives an output like this:

debug:
System.currentTimeMillis() method:  - Off by 4 ms (4040402 ns)
System.nanoTime() method:           - Off by 0 ms (110 ns)
BUILD SUCCESSFUL (total time: 0 seconds)

Conclusion: use nanoTime(), and have a great day.
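
If all you need is the delay itself, the nanoTime() busy-wait can be pulled out into a helper (a sketch; the method name is arbitrary, and a real application would probably combine a coarse Thread.sleep() with a short spin instead of burning a whole core):

public static void spinWaitNanos(long delayNs)
{
    // Busy-wait on the monotonic clock until the deadline passes
    final long deadline = System.nanoTime() + delayNs;
    while (System.nanoTime() < deadline) {}
}

For example, spinWaitNanos(5 * 1000000L) gives roughly the 5 ms delay used above.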

  • `System.nanoTime` will give you higher resolution time: http://stackoverflow.com/questions/351565/system-currenttimemillis-vs-system-nanotime – Alex Kleiman Aug 16 '14 at 18:58
  • https://blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks – JLindsey Aug 16 '14 at 18:58
  • @JaredLindsey The details in that link are *much* better than in the linked answers (although there still might be a better answer I haven't seen), as it actually throws down some expected numbers. I'd upvote an answer that appropriately summarizes/highlights said resource in context. – user2864740 Aug 16 '14 at 18:59
  • I've done some profiling and it is very clear that using nanoTime() is a much better approach, for everyone else in the future who has this issue. Thanks everyone. – JLindsey Aug 16 '14 at 19:40

1 Answer


Use System.nanoTime instead. See this answer about the difference between nanoTime and currentTimeMillis.
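
For example, measuring an elapsed interval with nanoTime() looks roughly like this (a sketch; the Thread.sleep(5) is just a stand-in for whatever is being timed):

public static void nanoTimeElapsedExample() throws InterruptedException
{
    long NS_PER_MS = 1000000; 

    long start = System.nanoTime();
    Thread.sleep(5);  // stand-in for the work being timed
    long elapsedNs = System.nanoTime() - start;

    System.out.format("Elapsed: %d ms (%d ns)\n",
            elapsedNs / NS_PER_MS, elapsedNs);
}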

  • Holy cow. I didn't believe it...but I do now. Wow. I'm gonna post my edited code and show the difference – JLindsey Aug 16 '14 at 19:07