17

Java provides two methods for getting the current time: System.nanoTime() and System.currentTimeMillis(). The first reports its result in nanoseconds, but its actual accuracy is much worse than that (many microseconds).

Is the JVM already providing the best possible value for each particular machine? If not, is there some Java library that can give a finer measurement, possibly by being tied to a particular system?
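To make the question concrete, here is a minimal sketch (class name and loop count are illustrative) of the kind of probe that shows the coarse granularity: it calls System.nanoTime() in a tight loop and records the non-zero steps between successive readings.

```java
import java.util.TreeSet;

public class NanoTimeGranularity {
    public static void main(String[] args) {
        TreeSet<Long> deltas = new TreeSet<>();
        long prev = System.nanoTime();
        for (int i = 0; i < 1000000; i++) {
            long now = System.nanoTime();
            if (now != prev) {
                deltas.add(now - prev);   // record each observed step
                prev = now;
            }
        }
        // The smallest step is an upper bound on the clock's real resolution.
        if (deltas.isEmpty()) {
            System.out.println("clock never advanced during the loop");
        } else {
            System.out.println("smallest observed step: " + deltas.first() + " ns");
        }
    }
}
```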

jjnguy
penpen
  • In what operating system did you prove that System.nanoTime() was producing output that was off by "many microseconds"? – James Jones Sep 30 '09 at 22:20
  • It was on a Linux system on a dual-core machine, but it was a fairly old install (from early 2007...). Maybe that was the cause. I will check back on something more recent. Also, from what I remember, I had successive calls returning the same value and then jumping by a few microseconds. – penpen Sep 30 '09 at 22:35

5 Answers

19

The problem with getting super precise time measurements is that some processors can't/don't provide such tiny increments.

As far as I know, System.currentTimeMillis() and System.nanoTime() are the best measurements you will be able to find.

Note that both return a long value.

Don Scott
jjnguy
  • Modern processors (>1 GHz) have a cycle time shorter than 1 nanosecond, so they are technically quite capable. – James Jones Sep 30 '09 at 22:29
  • 2
    They could keep track of the time, but it doesn't mean that they are reporting time that accurately. – jjnguy Sep 30 '09 at 22:30
  • 3
    Don't forget that there's overhead: there's a system call involved, which is typically on the order of microseconds itself (just to jump into the kernel and back out, but that's the expensive part of a clock read). Then you might have a loaded system with pre-emption enabled, meaning some other process might get scheduled. Even if this isn't the case, you still have to jump into the JVM, and even with JITed code there's going to be a slight overhead. In native code, you can use the clock_gettime & friends API for exploring the accuracy of high-resolution timers. – Vitali Sep 30 '09 at 22:54
  • Indeed, I just tried: on my home machine, it looks like nanoTime does take more than one microsecond (the mean is 1.2 µs, measured by calling it 100,000 times). – penpen Sep 30 '09 at 23:06
  • Linux time tick precision is 10 ms by default, so asking for nanoseconds is not useful unless you tune the kernel to support it (the URL for how to tune it is in my answer) – Oscar Chan Sep 30 '09 at 23:23
6

It's a bit pointless measuring time down to the nanosecond scale in Java; an occasional GC hit will easily wipe out any accuracy this may have given. In any case, the documentation states that whilst it gives nanosecond precision, that's not the same thing as nanosecond accuracy; and there are operating systems which don't report nanoseconds in any case (which is why you'll find values quantized to 1000 when accessing them; it's not luck, it's limitation).

Not only that, but depending on how the feature is actually implemented by the OS, you might find quantized results coming through anyway (e.g. answers that always end in 64 or 128 instead of intermediate values).

It's also worth noting that the purpose of the method is to measure the time difference between some (nearby) start time and now; if you take System.nanoTime() at the start of a long-running application and then take System.nanoTime() again a long time later, it may have drifted quite far from real time. So you should really only use it for periods of less than 1 s; if you need a longer running time than that, milliseconds should be enough. (And if they're not, then make up the last few digits; you'll probably impress clients and the result will be just as valid.)
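The correct usage pattern implied above can be sketched as follows: nanoTime() values are only meaningful relative to another nanoTime() reading in the same JVM, never as wall-clock times (class name and sleep duration are illustrative).

```java
import java.util.concurrent.TimeUnit;

public class ElapsedTimer {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();   // meaningful only relative to another nanoTime() reading
        Thread.sleep(50);                 // the short-lived work being timed
        long elapsedNanos = System.nanoTime() - start;
        // Convert only for reporting; never compare nanoTime() to currentTimeMillis().
        System.out.println("elapsed: " + TimeUnit.NANOSECONDS.toMillis(elapsedNanos) + " ms");
    }
}
```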

AlBlue
  • "So you should only really use it for periods of less than 1s". It is for a small repeated phenomenon. "And if it's not, then make up the last few numbers". Nah, they may want to try and reproduce this :) – penpen Sep 30 '09 at 22:57
1

Unfortunately, I don't think Java RTS is mature enough at this moment.

Java does try to provide the best value (it delegates to native code that reads the kernel time). However, the JVM spec makes this coarse-measurement disclaimer mainly because of things like GC activity and the capabilities of the underlying system.

  • Certain GC activities will block all threads even if you are running concurrent GC.
  • The default Linux clock tick precision is only 10 ms. Java cannot do any better if the Linux kernel does not support it.

I haven't figured out how to address #1 unless your app never needs to GC. A decent, mid-sized application will occasionally spend tens of milliseconds in GC pauses. You are probably out of luck if your precision requirement is below 10 ms.

As for #2, you can tune the Linux kernel to give more precision. However, you also get less throughput out of your box, because the kernel now context-switches more often.

Perhaps we should look at it from a different angle. Is there a reason the OP needs precision of 10 ms or lower? Is it okay to say that the precision is 10 ms, and also to look at the GC log for that period, so they know the time is accurate to ±10 ms when there is no GC activity around it?

Oscar Chan
  • "Certain GC activities will block all threads even if you are running concurrent GC." You are right, but on the other hand, with some tuning of the JVM parameters, this can be partially alleviated. And as proposed, yes, the time spent in GC can be taken into account and removed. – penpen Sep 30 '09 at 22:49
  • My point is not that we can't tune it. My point is that you can't get GC down to the nanosecond level you seem to want, even if you tune it. That was my definition of "decent" applications, which should already be tuned :) – Oscar Chan Sep 30 '09 at 23:27
0

If you are looking to record some type of phenomenon on the order of nanoseconds, what you really need is a real-time operating system. The accuracy of the timer will greatly depend on the operating system's implementation of its high resolution timer and the underlying hardware.

However, you can still stay with Java since there are RTOS versions available.

James Jones
0

JNI: Create a simple function to access the Intel RDTSC instruction or the PMCCNTR register of coprocessor CP15 on ARM.

Pure Java: You can possibly get better values if you are willing to delay until a clock tick: spin checking System.nanoTime() until the value changes. If you know, for instance, that System.nanoTime() changes by amount DELTA every 10000 loop iterations on your platform, then the actual event time was finalNanoTime - DELTA * ITERATIONS / 10000. You will need to "warm up" the code before taking actual measurements.
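The spin-until-tick part of that idea can be sketched like this (class and method names are illustrative; on platforms with a fine-grained clock the spin loop exits almost immediately, which is harmless):

```java
public class TickAligned {
    public static void main(String[] args) {
        // Warm up so the JIT compiles the spin loop before we measure.
        for (int i = 0; i < 5; i++) {
            spinToNextTick();
        }
        long t = spinToNextTick();
        System.out.println("measurement aligned to tick boundary at: " + t);
    }

    // Busy-wait until System.nanoTime() advances, so the caller
    // starts measuring exactly on a tick boundary.
    static long spinToNextTick() {
        long last = System.nanoTime();
        long now;
        while ((now = System.nanoTime()) == last) {
            // spin until the clock advances
        }
        return now;
    }
}
```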

Hack (for profiling, etc., only): If garbage collection is throwing you off, you could always measure the time using a high-priority thread running in a second JVM which doesn't create objects. Have it spin incrementing a long in shared memory, which you use as a clock.
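An in-process sketch of the same spinning-clock idea (the original suggests a second JVM with shared memory; this simplified single-JVM variant, with illustrative names, just shows the mechanism): one thread does nothing but bump a counter, and readers use the counter as a relative, allocation-free timestamp. The single-writer pattern makes the non-atomic volatile increment safe.

```java
public class SpinClock {
    // Written by exactly one thread, so the non-atomic ++ is safe;
    // volatile makes the latest value visible to readers.
    static volatile long ticks = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread clock = new Thread(() -> {
            while (true) ticks++;   // increments as fast as the core allows
        });
        clock.setDaemon(true);
        clock.setPriority(Thread.MAX_PRIORITY);
        clock.start();

        long before = ticks;
        Thread.sleep(10);           // the work being profiled
        long after = ticks;
        System.out.println("elapsed ticks: " + (after - before));
    }
}
```

Note that tick counts are only comparable to each other, and their rate varies with CPU load and frequency scaling, so this is for rough relative profiling only.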