67

Today I did a quick little benchmark to test the speed of System.nanoTime() and System.currentTimeMillis():

long startTime = System.nanoTime();

for (int i = 0; i < 1000000; i++) {
  // swap in System.currentTimeMillis() for the second run
  long test = System.nanoTime();
}

long endTime = System.nanoTime();

System.out.println("Total time: " + (endTime - startTime));

These are the results:

System.currentTimeMillis(): average of 12.7836022 ns / call
System.nanoTime():          average of 34.6395674 ns / call

Why are the differences in running speed so big?

Benchmark system:

Java 1.7.0_25
Windows 8 64-bit
CPU: AMD FX-6100
Frithjof
  • possible duplicate of [Why do System.nanoTime() and System.currentTimeMillis() drift apart so rapidly?](http://stackoverflow.com/questions/5839152/why-do-system-nanotime-and-system-currenttimemillis-drift-apart-so-rapidly) – chrylis -cautiouslyoptimistic- Sep 27 '13 at 13:46
  • This post might answer your question: http://stackoverflow.com/a/5839267/658907 – matts Sep 27 '13 at 13:47
  • `nanoTime` is more precise than `currentTimeMillis`, that might be the reason. – Thomas Sep 27 '13 at 13:47
  • Disagree that this is a duplicate question. This is asking about speed of execution. The other question is asking about drift. Maybe the answer is similar, but the question is not. – Erick Robertson Sep 27 '13 at 15:42
  • Doesn't answer your question but even the "slow" `nanoTime` only takes 34 *nanoseconds* to execute. I don't see a lot of use cases where that is so slow it becomes a problem. – yannick1976 Mar 18 '20 at 14:04

5 Answers

71

From this Oracle blog:

System.currentTimeMillis() is implemented using the GetSystemTimeAsFileTime method, which essentially just reads the low resolution time-of-day value that Windows maintains. Reading this global variable is naturally very quick - around 6 cycles according to reported information.

System.nanoTime() is implemented using the QueryPerformanceCounter / QueryPerformanceFrequency API (if available, else it returns currentTimeMillis*10^6). QueryPerformanceCounter (QPC) is implemented in different ways depending on the hardware it's running on. Typically it will use either the programmable-interval-timer (PIT), or the ACPI power management timer (PMT), or the CPU-level timestamp-counter (TSC). Accessing the PIT/PMT requires execution of slow I/O port instructions and as a result the execution time for QPC is in the order of microseconds. In contrast reading the TSC is on the order of 100 clock cycles (to read the TSC from the chip and convert it to a time value based on the operating frequency).

Perhaps this answers the question: the two methods need different numbers of clock cycles, which is why the latter one is slower.

Further in that blog in the conclusion section:

If you are interested in measuring/calculating elapsed time, then always use System.nanoTime(). On most systems it will give a resolution on the order of microseconds. Be aware though, this call can also take microseconds to execute on some platforms.
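
A minimal sketch of the elapsed-time pattern that conclusion recommends (the Thread.sleep call is just a stand-in workload, not part of any real benchmark):

public class ElapsedTimeDemo {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();

        Thread.sleep(50); // stand-in workload; replace with the code you want to time

        long elapsedNanos = System.nanoTime() - start;
        // nanoTime values are only meaningful as a difference between two calls,
        // never as absolute wall-clock timestamps
        System.out.println("Elapsed: " + (elapsedNanos / 1_000_000) + " ms");
    }
}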

user207421
Rohit Jain
  • Nice answer. Also this website: http://stas-blogspot.blogspot.nl/2012/02/what-is-behind-systemnanotime.html Someone did a little research and also shows the source code behind System.nanoTime on different OSes. – MystyxMac Sep 27 '13 at 14:16
24

Most OSes have an in-memory counter/clock that provides millisecond accuracy (or close to it). For nanosecond accuracy, most have to read a hardware counter, and communicating with hardware is slower than reading a value that is already in memory.

Eelke
5

This may be the case only on Windows. See this answer to a similar question.

Basically, System.currentTimeMillis() just reads a global variable maintained by Windows (which is why it has low granularity), whereas System.nanoTime() actually has to perform I/O operations.

Oliv
Michael Borgwardt
1

You are measuring that on Windows, aren't you? I went through this exercise in 2008. nanoTime IS slower on Windows than currentTimeMillis. As I recall, on Linux nanoTime is faster than currentTimeMillis, and it is certainly faster there than it is on Windows.

The important thing to note is that if you are measuring the aggregate of many sub-millisecond operations, you must use nanoTime: if an operation finishes in less than 1/1000th of a second, code comparing currentTimeMillis values will show the operation as instantaneous, so 1,000 of these will still look instantaneous. What you might want to do is use nanoTime then round to the nearest millisecond, so if an operation took 8000 nanoseconds it will be counted as 1 millisecond, not 0.
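
A minimal sketch of that aggregation idea, with a hypothetical doWork() standing in for the sub-millisecond operation: accumulate the nanoTime deltas and convert to milliseconds once at the end, so the sub-millisecond pieces are not flattened to zero along the way:

public class AggregateTimingDemo {
    static double sink; // keeps the placeholder work from being optimized away

    public static void main(String[] args) {
        long totalNanos = 0;
        for (int i = 0; i < 1000; i++) {
            long start = System.nanoTime();
            doWork();
            totalNanos += System.nanoTime() - start;
        }
        // convert the aggregate once; per-operation currentTimeMillis deltas
        // would have been 0 for every iteration
        System.out.println("Total: " + (totalNanos / 1_000_000) + " ms");
    }

    static void doWork() {
        sink += Math.sqrt(42.0); // hypothetical sub-millisecond operation
    }
}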

Walt Corey
0

What you might want to do is use nanoTime then round to the nearest millisecond, so if an operation took 8000 nanoseconds it will be counted as 1 millisecond, not 0.

Arithmetic note:

8000 nanoseconds is 8 microseconds, which is 0.008 milliseconds. Rounding to the nearest millisecond takes that to 0 milliseconds, not 1.
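
A quick way to check that arithmetic in Java; the round-up variant is only a guess at what the earlier answer intended:

public class RoundingDemo {
    public static void main(String[] args) {
        long nanos = 8000;                               // 0.008 ms
        long nearest = Math.round(nanos / 1_000_000.0);  // nearest-millisecond rounding -> 0
        long roundedUp = (nanos + 999_999) / 1_000_000;  // rounding up (ceiling) -> 1
        System.out.println(nearest + " ms vs " + roundedUp + " ms");
    }
}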

dave