25

From time to time I encounter mentions of System.nanoTime() being a lot slower (a call could cost up to microseconds) than System.currentTimeMillis(), but the proof links are often outdated, lead to fairly opinionated blog posts that can't really be trusted, or contain information pertaining to one specific platform, and so on.

I didn't run benchmarks since I'm being realistic about my ability to conduct an experiment concerning such a sensitive matter, but my conditions are really well-defined, so I'm expecting quite a simple answer.

So, on an average 64-bit Linux (implying a 64-bit JRE), Java 8 and modern hardware, will switching to nanoTime() cost me those microseconds per call? Should I stay with currentTimeMillis()?

tkroman
  • Do you need nanosecond accuracy? Then you have no option. If you don't need a timestamp that accurate then don't use it... – Boris the Spider Jul 11 '14 at 15:21
  • It should be easy enough to benchmark this....just make sure you warm up the JVM before measuring. – Tim B Jul 11 '14 at 15:27
  • If you want to be realistic, then you should probably do benchmarking, as the performance and resolution [really does vary](https://blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks) depending on machine/OS. In general, your decision to use one over the other shouldn't be determined by performance, as they're both used in very different cases where a certain resolution is preferred over the other. Also, here's a [similar question](http://stackoverflow.com/questions/19052316/why-is-system-nanotime-way-slower-in-performance-than-system-currenttimemill). – sgbj Jul 11 '14 at 15:29
  • @BoristheSpider, up to now I didn't even need microsecond accuracy, but I've recently made a couple of changes to my application engine (which I'm quite proud of :)) and found out that the timestamps I used to provide were not enough, i.e. I ran the tests and they failed due to (I'd like to not go deeper) low timer resolution. So I decided to ensure that something like this won't happen again. – tkroman Jul 11 '14 at 15:35
  • One option is to implement both, and allow the user to choose the best option for their environment. historically, we've seen many issues with nano based times on windows boxes, so our app uses milliseconds based timing on windows by default. – jtahlborn Jul 11 '14 at 15:57
  • You may not get **any** improvements in resolution. I recently ran some tests on a quite powerful machine and found I was only getting +/-15ms resolution from `currentTimeMillis` anyway and as another post mentions, `nanoTime` could well give you no better. – OldCurmudgeon Jul 11 '14 at 16:24
  • @OldCurmudgeon If `nanoTime` doesn't give you any better resolution, it also degrades to the same kernel call which means that in that case it wouldn't be more expensive (except for possible costs guaranteeing monotonicity if the underlying platform is broken). After all why would an OS have two timers that give the same resolution but one is slower than the other? Obviously you could trivially replace the slower implementation with a call to the faster one. Also `currentTimeMillis` is prone to take up to 100ms on some Windows configurations. – Voo Jul 11 '14 at 23:41
  • @cdshines believe it or not, one month after I now have to do high-res timing of events, so I looked into this matter again and the approach in this article from 2003 seems to be one of the best: http://www.javaworld.com/article/2077327/core-java/my-kingdom-for-a-good-timer.html – berezovskyi Aug 11 '14 at 16:21

5 Answers

14

As always, it depends on what you're using it for. Since others are bashing nanoTime, I'll put a plug in for it. I exclusively use nanoTime to measure elapsed time in production code.

I shy away from currentTimeMillis in production because I typically need a clock that doesn't jump backwards and forwards like the wall clock can (and does). This is critical in my systems, which make important timer-based decisions. nanoTime should be monotonically increasing at the rate you'd expect.

In fact, one of my co-workers says "currentTimeMillis is only useful for human entertainment," (such as the time in debug logs, or displayed on a website) because it cannot be trusted to measure elapsed time.

But really, we try not to use time as much as possible, and attempt to keep time out of our protocols; then we try to use logical clocks; and finally if absolutely necessary, we use durations based on nanoTime.

Update: There is one place where we use currentTimeMillis as a sanity check when connecting two hosts: we check whether the hosts' clocks are more than 5 minutes apart.
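As a sketch of that sanity check (the five-minute threshold is from the answer; the class, method, and parameter names here are mine, purely for illustration):

```java
public class ClockSkewCheck {
    private static final long MAX_SKEW_MS = 5 * 60 * 1_000;

    // peerMillis is the other host's System.currentTimeMillis(),
    // received over the wire (hypothetical protocol detail).
    static boolean clockSkewAcceptable(long peerMillis) {
        return Math.abs(System.currentTimeMillis() - peerMillis) <= MAX_SKEW_MS;
    }

    public static void main(String[] args) {
        // Checking against our own clock: skew is ~0, well under 5 minutes.
        System.out.println(clockSkewAcceptable(System.currentTimeMillis()));
    }
}
```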

Michael Deardeuff
  • Thank you! Having a different point of view is really good. Actually the reason I'm interested in the subject is that I have to timestamp events in a system and up to some point I didn't have any problem at all, but recently I've started to notice that there are events that have the same `currentTimeMillis` timestamp, and that is quite unacceptable in my case. That's why I'm interested both in high resolution *and* performance - if two events can happen within one millisecond, there is no sense in timestamping them using the method with the lag that eliminates performance advantage, IMO. – tkroman Jul 11 '14 at 21:21
  • I mean, if two events can happen within a single millisecond, wouldn't getting the *actual* time at higher-than-millisecond resolution, even with a couple of microseconds of lag, be more important than getting a fast but rough result in my case? – tkroman Jul 11 '14 at 21:26
  • Yes! Yes yes yes! I have seen too many systems fail because they were running on a machine whose clock automatically resets itself periodically and they didn't design with this in mind. If you need to measure duration against realtime, you need a monotonic clock, and in Java that means nanoTime() or something based on it. (And correcting this later is a MAJOR problem in a large codebase -- it requires figuring out which currentTimeMillis() calls were being used for duration, which were for displaying wallclock time, and which were (ugh) being used for both at once.) – keshlam Jul 11 '14 at 21:36
  • We've never had the performance of the system calls get in the way (we don't call it in an inner loop.) For unique timestamps like what you want, nowadays we are converting `nanoTime` to microseconds, and incrementing it by one if it collides with the prior event. This gives us 1_000_000 events/second. – Michael Deardeuff Jul 11 '14 at 21:37
  • You just have to keep in mind that every host's `nanoTimes` will be *drastically* different--they're unique per jvm. – Michael Deardeuff Jul 11 '14 at 21:39
  • @cdshines Just beware that different calls to *both* methods can easily return the same time. Considering that the precision on Windows for example is just about 300ns but the latency for calling it is about 15 (my system) - you can actually get about 20 identical timestamps on just a single thread, ignoring the obvious problems with multiple threads. So neither solution is very robust for what you want. – Voo Jul 12 '14 at 00:03
  • @Voo depends on how you use it; my earlier comment tried to explain how I have used it as a monotonically increasing timestamp for events. That scheme has been very robust. – Michael Deardeuff Jul 12 '14 at 00:08
  • @Michael But only because you stored the previous timestamp and handled collisions yourself. Certainly a solution, although doing that thread-safe seems actually pretty involved if you want to do it lock-free. The obvious solution has a race condition, although it's 2am here so I'm sure one can solve this with a CAS somewhere (edit: yep doable easily^^"). Anyhow my point was that you need an external source of uniqueness because neither timer alone will work. I didn't want to imply that you couldn't use it as part of an unique system. – Voo Jul 12 '14 at 00:20
  • @Voo indeed, it's a very-low-level piece of a solution. – Michael Deardeuff Jul 12 '14 at 00:59
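The unique-timestamp scheme described in these comments can be sketched like this (a synchronized, single-JVM version; the class and method names are made up for illustration):

```java
public class UniqueMicros {
    private long last = 0;

    // Convert nanoTime to microseconds; if that collides with the
    // previous stamp, advance by one microsecond - giving at most
    // 1_000_000 unique stamps per second, as the comment describes.
    public synchronized long next() {
        long micros = System.nanoTime() / 1_000;
        if (micros <= last) {
            micros = last + 1;
        }
        last = micros;
        return micros;
    }

    public static void main(String[] args) {
        UniqueMicros clock = new UniqueMicros();
        long a = clock.next();
        long b = clock.next();
        System.out.println(a < b); // prints "true": stamps are strictly increasing
    }
}
```

Note the stamps are only unique within one JVM; as the comments point out, nanoTime values are not comparable across hosts.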
11

If you are currently using currentTimeMillis() and are happy with the resolution, then you definitely shouldn't change.

According to the javadoc:

This method provides nanosecond precision, but not necessarily nanosecond resolution (that is, how frequently the value changes) - no guarantees are made except that the resolution is at least as good as that of {@link #currentTimeMillis()}.

So depending on the OS implementation, there is no guarantee that the nano time returned is any more accurate - it may just be a longer number that changes no more often than currentTimeMillis() does.

A perfectly valid implementation could be `currentTimeMillis() * 1000000`

Therefore, I don't think you really gain a benefit from nanoseconds even if there weren't a performance issue.
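To make that concrete, such a degenerate-but-conforming implementation could look like this (illustrative only; no claim that any real JVM actually does this):

```java
public class FakeNanoTime {
    // Nanosecond precision, but only millisecond resolution:
    // per the javadoc quoted above, this is still a conforming
    // nanoTime(), since its resolution matches currentTimeMillis().
    static long fakeNanoTime() {
        return System.currentTimeMillis() * 1_000_000;
    }

    public static void main(String[] args) {
        System.out.println(fakeNanoTime());
    }
}
```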

dkatzel
  • additionally, nano has issues in some virtual machine environments, so definitely stick w/ `currentTimeMillis()` unless you really have a reason to switch. – jtahlborn Jul 11 '14 at 15:31
  • @jtahlborn would you like to elaborate on your statement, please? – berezovskyi Jul 11 '14 at 15:40
  • @TimB I did answer the question which was "Should I stay with currentTimeMillis()?" to which I say yes stay. – dkatzel Jul 11 '14 at 15:42
  • @berezovskiy - one example is that nano time on various OSes relies on cpu timers. these timers are supposed to be monotonically increasing values. virtual machine hosts can "simulate" a single cpu with multiple physical cpus, and thus can generate nano time readings which are _not_ monotonically increasing (i.e. they go back in time or skew all over the place). – jtahlborn Jul 11 '14 at 15:47
  • @jtahlborn oh yeah, I just read about it few minutes ago here: http://stas-blogspot.blogspot.de/2012/02/what-is-behind-systemnanotime.html – berezovskyi Jul 11 '14 at 15:49
  • `currentTimeMillis() * 1000` could only be perfectly valid for microseconds. :) – xehpuk Jul 11 '14 at 16:47
  • "there is no guarantee that the nano time returned is even correct" - umn wait what? Are you confusing precision and accuracy here? The right answer really depends on *what* you want to do, before looking it any further details. `currentTimeMillis` is not monotonic so that's absolutely a problem if you want to do benchmarks or rely on monotonicity. It's also horribly inaccurate (15ms or up to 100ms) on many systems.. – Voo Jul 11 '14 at 23:34
  • @jtahlborn The JVM has code in it to take care of that for platforms that have broken underlying timer implementations. Modern Linux and afaik also Windows (no idea about Mac OS X) shouldn't exhibit those bugs any more and in Java itself you shouldn't notice it anyhow (although the necessary fixes do cause some performance degradation.. and are again only done for nanoSeconds but not currentTimeMillis). Pretty much if you want monotonic time stick with `nanoSeconds` - it may not be absolutely perfect, but it's certainly better than any alternative. – Voo Jul 11 '14 at 23:36
  • @Voo - do you have some links to provide more information on that? i know for a fact that we still have problems with our product on windows and on virtual machines. (at least for jdk 6). – jtahlborn Jul 12 '14 at 00:20
  • @jtahlborn How interesting. Apparently the code for that only exists for Solaris, but neither Windows nor Linux (for the curious: `src/share/os//os_.cpp` - it's `os::javaTimeNanos`). Still I'm surprised - are your VMs running Windows XP? I thought since Vista (and for all relatively modern Linux kernels) there was a guaranteed monotonic timer available. There's an old bug about non monotonicity and it seems an easy fix, you may want to bring it up again and point to the existing solaris approach (basically just store the last value and use CAS). – Voo Jul 12 '14 at 00:50
  • @Voo - i can't find the article right now, but there is a good article about the state of the windows timer detailing the problems. and the virtual machine issues are a separate beast altogether. – jtahlborn Jul 12 '14 at 03:37
10

I want to stress that even if the calls were very cheap, you would not get nanosecond resolution in your measurements.

Let me give you an example (code from http://docs.oracle.com/javase/8/docs/api/java/lang/System.html#nanoTime--):

long startTime = System.nanoTime();
// ... the code being measured ...
long estimatedTime = System.nanoTime() - startTime;

So while both long values are resolved to a nanosecond, the JVM does not guarantee that every call you make to nanoTime() will return a new value.

To illustrate this, I wrote a simple program and ran it on Win7x64 (feel free to run it and report the results as well):

package testNano;
public class Main {
    public static void main(String[] args) {
        long attempts = 10_000_000L;
        long stale = 0;
        long prevTime;
        for (int i = 0; i < attempts; i++) {
            prevTime = System.nanoTime();
            long nanoTime = System.nanoTime();
            if (prevTime == nanoTime) stale++;
        }
        System.out.format("nanoTime() returned stale value in %d out of %d tests%n", stale, attempts);
    }
}

It prints out nanoTime() returned stale value in 9117171 out of 10000000 tests.
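A variant of the same idea probes the smallest non-zero step the clock actually reports, which gives a rough idea of its effective resolution on your machine (results are, again, platform-specific):

```java
public class MinStep {
    public static void main(String[] args) {
        long minStep = Long.MAX_VALUE;
        long prev = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            long now = System.nanoTime();
            // Only count calls where the clock actually moved forward.
            if (now > prev) {
                minStep = Math.min(minStep, now - prev);
            }
            prev = now;
        }
        System.out.println("smallest observed step: " + minStep + " ns");
    }
}
```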

EDIT

I also recommend reading the Oracle article on this: https://blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks. The conclusions of the article are:

If you are interested in measuring absolute time then always use System.currentTimeMillis(). Be aware that its resolution may be quite coarse (though this is rarely an issue for absolute times.)

If you are interested in measuring/calculating elapsed time, then always use System.nanoTime(). On most systems it will give a resolution on the order of microseconds. Be aware though, this call can also take microseconds to execute on some platforms.

Also you might find this discussion interesting: Why is System.nanoTime() way slower (in performance) than System.currentTimeMillis()?.

berezovskyi
4

Running this very simple test:

public static void main(String[] args) {
    // Warmup loops
    long l;

    for (int i=0;i<1000000;i++) {
        l = System.currentTimeMillis();
    }

    for (int i=0;i<1000000;i++) {
        l = System.nanoTime();
    }

    // Full loops
    long start = System.nanoTime();
    for (int i=0;i<10000000;i++) {
        l = System.currentTimeMillis();
    }
    start = System.nanoTime()-start;
    System.err.println("System.currentTimeMillis() "+start/1000);

    start = System.nanoTime();
    for (int i=0;i<10000000;i++) {
        l = System.nanoTime();
    }
    start = System.nanoTime()-start;
    System.err.println("System.nanoTime() "+start/1000);

}

On Windows 7 this shows millis to be just over 2 times as fast:

System.currentTimeMillis() 138615
System.nanoTime() 299575

On other platforms, the difference isn't as large, with nanoTime() actually being slightly (~10%) faster:

On OS X:

System.currentTimeMillis() 463065
System.nanoTime() 432896

On Linux with OpenJDK:

System.currentTimeMillis() 352722
System.nanoTime() 312960
mpontillo
Tim B
  • though given you never read from the local variable `l` you may find the JIT decides to optimise-away the entire loop... – Ian Roberts Jul 11 '14 at 18:49
  • @IanRoberts I considered that but it shouldn't, as they are method calls. I just modified the test to use l and it made no difference. – Tim B Jul 11 '14 at 18:53
  • Thank you, I'll get to my working PC and take measurements on a Linux platform on Monday, I think. – tkroman Jul 11 '14 at 19:50
  • Macbook Air, Java7 System.currentTimeMillis() 463065 System.nanoTime() 432896 – korCZis Jul 11 '14 at 20:51
  • On Linux with OpenJDK `7u55-2.4.7-1ubuntu1` I see `System.currentTimeMillis() 352722; System.nanoTime() 312960` – mpontillo Jul 11 '14 at 22:26
4

Well, the best thing to do in such situations is always to benchmark it. And since the timing depends solely on your platform and OS, there's really nothing we can do for you here, particularly since you don't explain what you actually use the timer for.

Neither nanoTime nor currentTimeMillis generally guarantees monotonicity (nanoTime does on HotSpot for Solaris only, and otherwise relies on an existing monotonic time source of the OS - so for most people it will be monotonic even if currentTimeMillis is not).

Luckily for you, writing benchmarks in Java is relatively easy these days thanks to jmh (the Java Microbenchmark Harness), and even luckier for you, Aleksey Shipilёv actually investigated nanoTime a while ago: see here - including source code to do the interesting benchmarking yourself. (It's also a nice primer to jmh itself; if you want to write accurate benchmarks with only relatively little knowledge, that's the one to pick. Just amazing how far the engineers behind that project went to make benchmarking as straightforward as possible for the general populace! Although you certainly can still fuck up if you're not careful ;-))

To summarize the results for a modern Linux distribution or Solaris and an x86 CPU:

  • Precision: 30ns
  • Latency: 30ns best case

Windows:

  • Precision: Hugely variable, 370ns to 15 µs
  • Latency: Hugely variable, 15ns to 15 µs

But note that Windows is also known to give you a precision of up to 100ms for currentTimeMillis in some rare situations, so... pick your poison.

Mac OS X:

  • Precision: 1µs
  • Latency: 50ns

Be aware that these results will differ greatly depending on your platform (CPU/motherboard - there are some interesting older hardware combinations around, although they're luckily getting rarer) and OS. Obviously, running this on an 800 MHz CPU will give rather different results than on a 3.6 GHz server.
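If you want to reproduce those numbers yourself, a minimal jmh benchmark could look like the sketch below (the class name is mine; it assumes the jmh annotations and runtime are on the classpath and is run through the jmh harness, not as a plain main):

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class ClockBench {
    // Returning the value prevents the JIT from eliminating the call.
    @Benchmark
    public long nanoTime() {
        return System.nanoTime();
    }

    @Benchmark
    public long currentTimeMillis() {
        return System.currentTimeMillis();
    }
}
```

jmh takes care of warmup, forking, and dead-code elimination, which is exactly the stuff the hand-rolled loops in the other answer's comments worry about.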

Voo