55

When measuring elapsed time on a low level, I have the choice of using any of these:

System.currentTimeMillis();
System.nanoTime();

Both methods are implemented natively. Before digging into any C code, does anyone know whether there is any substantial overhead in calling one or the other? I mean, if I don't really care about the extra precision, which one would be expected to consume less CPU time?

N.B: I'm using the standard Java 1.6 JDK, but the question may be valid for any JRE...

Lukas Eder

8 Answers

48

The answer marked correct on this page is actually not correct. That is not a valid way to write a benchmark because of JVM dead code elimination (DCE), on-stack replacement (OSR), loop unrolling, etc. Only a framework like Oracle's JMH micro-benchmarking framework can measure something like that properly. Read this post if you have any doubts about the validity of such micro benchmarks.

Here is a JMH benchmark for System.currentTimeMillis() vs System.nanoTime():

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class NanoBench {
   @Benchmark
   public long currentTimeMillis() {
      return System.currentTimeMillis();
   }

   @Benchmark
   public long nanoTime() {
      return System.nanoTime();
   }
}

And here are the results (on an Intel Core i5):

Benchmark                            Mode  Samples      Mean   Mean err    Units
c.z.h.b.NanoBench.currentTimeMillis  avgt       16   122.976      1.748    ns/op
c.z.h.b.NanoBench.nanoTime           avgt       16   117.948      3.075    ns/op

Which shows that System.nanoTime() is slightly faster at ~118ns per invocation compared to ~123ns. However, it is also clear that once the mean error is taken into account, there is very little difference between the two. The results are also likely to vary by operating system. But the general takeaway should be that they are essentially equivalent in terms of overhead.

UPDATE 2015/08/25: While this answer is closer to correct than most, since it uses JMH to measure, it is still not correct. Measuring something like System.nanoTime() itself is a special kind of twisted benchmarking. The answer and definitive article is here.

Oliv
brettw
  • Wonderful use of JMH! I wish more microbenchmarks on this site took advantage of it. – Joe C Jun 20 '15 at 19:29
  • I wanted to try your example and wondered why the `@GenerateMicroBenchmark` annotation doesn't work. It seems it has been renamed to `@Benchmark` more or less recently. – dajood Jul 07 '16 at 10:19
26

I don't believe you need to worry about the overhead of either. It's so minimal it's barely measurable itself. Here's a quick micro-benchmark of both:

for (int j = 0; j < 5; j++) {
    long time = System.nanoTime();
    for (int i = 0; i < 1000000; i++) {
        long x = System.currentTimeMillis();
    }
    System.out.println((System.nanoTime() - time) + "ns per million");

    time = System.nanoTime();
    for (int i = 0; i < 1000000; i++) {
        long x = System.nanoTime();
    }
    System.out.println((System.nanoTime() - time) + "ns per million");

    System.out.println();
}

And the last result:

14297079ns per million
29206842ns per million

It does appear that System.currentTimeMillis() is twice as fast as System.nanoTime() here. However, ~29ns per call is going to be much shorter than anything else you'd be measuring anyway. I'd go for System.nanoTime() for precision and accuracy, since it's not tied to the wall clock.

WhiteFang34
  • Nice benchmark. On my computer, it's the inverse though. I get this output: 17258920ns per million, 14974586ns per million. Which means it really depends on JVMs, processors, operating systems, etc. Apart from that the difference is almost irrelevant. Thanks for the nice answer! – Lukas Eder Apr 12 '11 at 19:35
  • No problem. Yeah, I'm sure there are all sorts of factors that will make it vary. Either way it appears you'd have to be calling it millions of times per second on a modern machine for it to be causing a noticeable timing overhead. – WhiteFang34 Apr 12 '11 at 19:40
  • Underlying OS would be interesting. I suspect that if the implementation uses POSIX's `clock_gettime()`, the difference would be ~0. – ninjalj Apr 12 '11 at 19:40
  • True. But your outer loop seems to show that the results are somewhat consistent on the same system. Apart from the first iteration, which may have some JVM overhead, all iterations produce roughly the same values... So it's safe to choose `nanoTime()` and get the little extra precision. – Lukas Eder Apr 12 '11 at 19:42
  • @ninjalj: I'm using Windows 7. God knows how many MSDOS 6.23, Windows 3.11 for Workgroups, and Windows ME clocks are still running in parallel :) – Lukas Eder Apr 12 '11 at 19:42
  • Actually that does remind me. The precision of `System.currentTimeMillis()` is dependent on the OS. With Windows 7 and other modern systems you get 1ms accuracy. However I do very much recall only getting 10ms and 16ms resolutions. And `System.nanoTime()` wasn't available in Java at that time. IIRC on really old versions of Windows it was 50ms, which was a major pain. – WhiteFang34 Apr 12 '11 at 19:49
  • @LukasEder To your initial comment, that's because microbenchmarking in Java is no trivial matter. Significantly varied results typically imply a poorly designed microbenchmark. See http://www.javacodegeeks.com/2011/09/java-micro-benchmarking-how-to-write.html – arkon May 18 '13 at 15:42
  • @b1naryatr0phy is correct, micro benchmarking is almost always done wrong. See my answer at the bottom of this page. – brettw Mar 10 '14 at 15:32
  • Variable x is not used and therefore the system call to get the time can be optimized away. At a later stage, it is possible to see that no work is done within the loops, and they too can be optimized away. Leaving very little to measure. The danger of microbenchmarks and the JIT. – UnixShadow Oct 20 '15 at 08:07
  • Unfortunately this answer is definitely not correct due to optimizations that can be performed by the JIT. Need to use something like JMH to properly benchmark something like this. The biggest thing here is that the return value is discarded so the entire call to ``System#currentTimeMillis`` is most likely removed in full. – pjulien Nov 20 '16 at 15:30
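As the comments point out, the unused `x` invites dead-code elimination. A rough way to keep the calls alive without reaching for JMH is to accumulate the return values into a variable that is later printed, so the JIT cannot prove the calls are unobservable. This is only a sketch of that idea (still not a rigorous benchmark, for all the reasons given above):

```java
public class NaiveTimerBench {
    public static void main(String[] args) {
        long sink = 0; // consuming the return values makes dead-code elimination much harder
        for (int j = 0; j < 5; j++) {
            long start = System.nanoTime();
            for (int i = 0; i < 1_000_000; i++) {
                sink += System.currentTimeMillis();
            }
            System.out.println((System.nanoTime() - start) + "ns per million (currentTimeMillis)");

            start = System.nanoTime();
            for (int i = 0; i < 1_000_000; i++) {
                sink += System.nanoTime();
            }
            System.out.println((System.nanoTime() - start) + "ns per million (nanoTime)");
        }
        System.out.println(sink); // keep the accumulator observable so it can't be eliminated
    }
}
```

Even so, OSR, loop unrolling, and warm-up effects still distort the numbers, which is why a framework like JMH (with its `Blackhole` for consuming values) remains the right tool.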
11

You should only ever use System.nanoTime() for measuring how long something takes to run. It's not just a matter of nanosecond precision: System.currentTimeMillis() is "wall clock time", while System.nanoTime() is intended for timing things and doesn't have the "real world time" quirks that the other does. From the Javadoc of System.nanoTime():

This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time.
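To illustrate the intended usage, a minimal elapsed-time measurement with System.nanoTime() looks like this (the `Thread.sleep` is just a placeholder for the work being timed):

```java
public class ElapsedTimeDemo {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();

        Thread.sleep(50); // placeholder for the work being timed

        long elapsedNanos = System.nanoTime() - start;
        // Only the difference between two readings is meaningful;
        // the absolute value of nanoTime() has no defined origin.
        System.out.println("Took " + elapsedNanos / 1_000_000 + " ms");
    }
}
```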

ColinD
  • Yes, I know the Javadoc mentions this. But both can be used equivalently if you measure things between 1-10ms lots of times. What do you mean by "real world time quirks"? – Lukas Eder Apr 12 '11 at 19:31
  • @Lukas: The problem is that `System.currentTimeMillis()` can (I believe) do things like insert small clock adjustments which would throw off timings. Rare, probably, but `nanoTime()` is intended for measurements that shouldn't be affected by that sort of thing. – ColinD Apr 12 '11 at 19:37
  • @Lukas: Actually, it basically just reflects the system time your computer shows. Change the date on your computer to a week ago during a timing using `currentTimeMillis()` and you'll get a nice negative number! =) Not so for `nanoTime()`. – ColinD Apr 12 '11 at 19:45
  • That's not going to happen, but nice thinking :-) – Lukas Eder Apr 12 '11 at 19:46
  • It's not just manually changing the time that you have to worry about. There is also daylight saving time, and clocks that auto-adjust themselves after checking with a server on the internet. – Jonathan Feb 23 '15 at 04:29
8

System.currentTimeMillis() is usually really fast (AFAIK 5-6 CPU cycles, though I don't remember where I read this), but its resolution varies across platforms.

So if you need high precision go for nanoTime(), if you are worried about overhead go for currentTimeMillis().
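If the platform-dependent resolution matters to you, it can be probed directly by spinning until the reported millisecond value changes; the jump between two consecutive distinct readings approximates the clock's tick size. A minimal sketch (assumption: the clock advances in fixed-size increments on your platform):

```java
public class MillisResolutionProbe {
    public static void main(String[] args) {
        long t0 = System.currentTimeMillis();
        long t1;
        // Busy-wait until the reported value changes; the delta is one tick.
        while ((t1 = System.currentTimeMillis()) == t0) {
            // spin
        }
        System.out.println("currentTimeMillis() tick: " + (t1 - t0) + " ms");
    }
}
```

On modern systems this typically prints 1 ms; older Windows versions were known to report 10-16 ms, as the comments above mention.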

Sam R.
Nikolaus Gradwohl
  • Good point. Check out WhiteFang34's answer. They seem to be somewhat equally fast, depending on the system used to benchmark them. – Lukas Eder Apr 12 '11 at 19:39
6

If you have time, watch this talk by Cliff Click; he speaks about the price of System.currentTimeMillis as well as other things.

mindas
4

The accepted answer to this question is indeed incorrect. The alternative answer provided by @brettw is good but nonetheless light on details.

For a full treatment of this subject and the real cost of these calls, please see https://shipilev.net/blog/2014/nanotrusting-nanotime/

To answer the asked question:

does anyone know if there is any substantial overhead calling one or the other?

  • The overhead of calling System#nanoTime is between 15 and 30 nanoseconds per call.
  • The value reported by nanoTime, i.e. its resolution, only changes once every 30 nanoseconds.

This means that if you're trying to handle millions of requests per second, calling nanoTime on every request means you're effectively losing a sizable chunk of each second just calling nanoTime. For such use cases, consider measuring requests from the client side instead, thus ensuring you don't fall into coordinated omission; measuring queue depth is also a good indicator.

If you're not trying to cram as much work as you can into a single second, then the cost of nanoTime won't really matter, but coordinated omission is still a factor.

Finally, for completeness, currentTimeMillis cannot be used no matter what its cost is, because it's not guaranteed to move forward between two calls. Especially on a server with NTP, currentTimeMillis is constantly being adjusted. Not to mention that most things measured by a computer don't take a full millisecond.
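A related consequence of nanoTime being a pure elapsed-time source: its raw value has no defined origin and may be any `long`, so per the Javadoc two readings should be compared by subtraction (`t1 - t0 > 0`), never with `t1 > t0`, which breaks if the values wrap around Long.MAX_VALUE. A small sketch of why that matters:

```java
public class NanoTimeComparison {
    // Correct comparison: works even if the raw values wrap around Long.MAX_VALUE,
    // because two's-complement subtraction still yields the true elapsed delta.
    static boolean happenedBefore(long t0, long t1) {
        return t1 - t0 > 0;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        long t1 = System.nanoTime();
        System.out.println(happenedBefore(t0, t1) || t0 == t1);

        // A wrapped pair of readings: numerically wrapped < nearMax,
        // yet subtraction correctly reports that it came later.
        long nearMax = Long.MAX_VALUE - 10;
        long wrapped = nearMax + 20; // overflows into negative territory
        System.out.println(happenedBefore(nearMax, wrapped)); // true
    }
}
```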

pjulien
  • Thanks for the feedback. In the meantime, I've also stumbled upon that article by Aleksey Shipilëv. Would you mind putting the essence of that article as an answer to my concrete Stack Overflow question, in the spirit of Stack Overflow (which is to have a full answer, possibly backed by further details in links on Stack Overflow directly)? I'll accept your answer, then. – Lukas Eder Nov 20 '16 at 16:05
  • I'm not doing this to get an accepted answer. Ultimately the issue is that the accepted answer is steering people in the wrong direction which is unfortunate. – pjulien Nov 20 '16 at 17:32
  • Updated it since you asked, but at the same, I always feel short StackOverflow like answers like this are just asking for trouble since they always leave too much room for interpretation – pjulien Nov 20 '16 at 18:27
2

At a theoretical level, for a VM that uses native threads and sits on a modern preemptive operating system, currentTimeMillis can be implemented so that the clock is read only once per timeslice. Presumably, nanoTime implementations would not sacrifice precision this way.

Dilum Ranatunga
  • That's a good point about precision. If you're right, then `currentTimeMillis()` couldn't be more precise. Or because it doesn't need to be precise, it can be implemented that way. But on the other hand, that doesn't say which one is faster... – Lukas Eder Apr 12 '11 at 19:45
0

The only problem with currentTimeMillis() is that when your VM adjusts its time (this normally happens automatically), currentTimeMillis() will go with it, thus yielding inaccurate results, especially for benchmarking.