5

I was testing some algorithms that I had surrounded with a nanosecond timer. When I forgot to remove the timer, I found out that this code:

    long a = System.nanoTime();
    System.out.println(System.nanoTime() - a);

always prints 4400 nanoseconds on my system. That would be 4.4 microseconds, whereas this code:

    long a = System.currentTimeMillis();
    for (int i = 0; i < 1000; i++)
        System.nanoTime();
    System.out.println(System.currentTimeMillis() - a);

prints 0.

Jason Aller

2 Answers

4

4400 nanoseconds is 4.4 microseconds, or 0.0044 milliseconds. The second example will always print zero because the elapsed time is much less than one millisecond. Then there are the differences between the two timers used: currentTimeMillis can get adjusted for clock skew while nanoTime cannot, but I doubt that's in play here.
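For instance, timing that same 1000-call loop with nanoTime() itself (a quick sketch, not code from the original post) shows a nonzero duration well under one millisecond, which currentTimeMillis() simply rounds down to 0:

    long start = System.nanoTime();
    for (int i = 0; i < 1000; i++)
        System.nanoTime();
    long elapsedNanos = System.nanoTime() - start;
    // Typically a few microseconds in total, i.e. far less than 1 ms
    System.out.println(elapsedNanos + " ns = " + (elapsedNanos / 1000000.0) + " ms");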

Jim Garrison
  • Oh my god, I am so stupid... I mistook micros for millis... Anyway, I updated the code, and it's still a bit strange. – Walter Lars Lee Aug 10 '12 at 11:44
  • 1
    Actual system timer resolution doesn't have to be nanoseconds. In your case it could be 4.4 microseconds (i.e. that's the smallest possible increment) – Jim Garrison Aug 10 '12 at 16:35
0

Using nanoTime() to measure itself probably wasn't a use case the implementers considered. Typically, you would use it to time "slow" operations, or many calls to "fast" operations that together are "slow", where "slow" is anything over, say, 0.1 milliseconds.

The reason you want to time longer operations is that the granularity of nanoTime() is not that fine; it depends on the hardware and OS, but it is nowhere near a nanosecond.
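A minimal sketch of that pattern (the workload here is just an arbitrary arithmetic loop, chosen purely for illustration): measure many iterations in one go, then divide by the iteration count to get an average per-call figure:

    int iterations = 1000000;
    long start = System.nanoTime();
    long sum = 0;
    for (int i = 0; i < iterations; i++)
        sum += (long) i * i;   // the "fast" operation being measured
    long elapsed = System.nanoTime() - start;
    // Report the total in ms and the average cost per iteration in ns
    System.out.println("total: " + (elapsed / 1000000.0) + " ms");
    System.out.println("per iteration: " + ((double) elapsed / iterations) + " ns");
    System.out.println(sum);   // use the result so the JIT can't discard the loop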

Bohemian