
Please only answer WHEN you fully comprehend the question.

Please do not close this as a duplicate; as far as I can tell, no similar question exists.

I am aware that System.nanoTime() gives nanoseconds measured from an arbitrary origin fixed when the JVM starts, and I am aware that System.currentTimeMillis() only gives millisecond precision.

What I am looking for, and please keep an open mind, is evidence for the hypothesis that the millisecond changes are not exact, once we try to define what "exact" means.

"Exact" would, in my world, mean that every time a new millisecond is registered, say we go from 97 ms to 98 ms to 99 ms and so forth, the switch lands precisely on a nanosecond boundary. Through whatever mechanism those updates happen, we cannot, at least in observed Java, expect nanosecond precision at the switches.

I know, I know, it sounds strange to expect that, but then the question becomes: how accurate are the millisecond switches?

It appears that if you call System.nanoTime() repeatedly, you get a linear graph with nanosecond resolution.

If, at the same time, we call System.currentTimeMillis() right after System.nanoTime() and disregard the varying cost of the calls themselves, the millisecond values do not form a linear graph at the same resolution; the ms graph is off by roughly ±250 ns.
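
To make the experiment concrete, here is a minimal sketch of the kind of measurement I mean. The class and variable names are just mine for illustration, and the loop deliberately ignores the cost of the calls themselves:

```java
// Records the System.nanoTime() value observed at each System.currentTimeMillis()
// switch and prints how far apart consecutive switches are, in nanoseconds.
public class MsSwitchProbe {
    public static void main(String[] args) {
        final int switches = 1_000;              // how many ms transitions to capture
        long[] switchNanos = new long[switches];

        long lastMs = System.currentTimeMillis();
        int captured = 0;
        while (captured < switches) {
            long nowNanos = System.nanoTime();
            long nowMs = System.currentTimeMillis();
            if (nowMs != lastMs) {               // the ms value just "switched"
                switchNanos[captured++] = nowNanos;
                lastMs = nowMs;
            }
        }

        // If the switches were perfectly regular, every gap would be ~1_000_000 ns.
        for (int i = 1; i < switchNanos.length; i++) {
            long gap = switchNanos[i] - switchNanos[i - 1];
            System.out.println("gap between ms switches: " + gap
                    + " ns (deviation " + (gap - 1_000_000) + " ns)");
        }
    }
}
```

If the millisecond switches were "exact" in the sense above, every printed gap would be very close to 1 000 000 ns; what I observe instead is the deviation I am asking about.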

This is to be expected, yet I cannot find any information on the error margin, i.e. the accuracy, of the millisecond value.

The same issue exists for second precision as well, and for hour, day and year precision, and so forth. When the new year arrives, how big is the error?

When a new millisecond arrives, how big is the error in terms of nanoseconds?

System.currentTimeMillis() cannot be trusted to stay linear against System.nanoTime(), and we cannot expect System.currentTimeMillis() to keep up with nanosecond precision.

But how big is the error? In computing in general? In Java? On Unix systems?

mjs
    `System.currentTimeMillis()` uses your system clock, which makes this arguably a duplicate of https://stackoverflow.com/q/2607263/869736. – Louis Wasserman Mar 24 '21 at 20:50
  • @LouisWasserman nope, this topic is about the error margin of System.currentTimeMillis(). – mjs Mar 24 '21 at 20:51
  • 1
    "*If we at the same time ask `System.currentTimeMillis()` right after `System.nanoTime()` and we disregard the variance in cost of commands, it appears as if there would be not a linear graph on the same resolution. The ms graph would +-250ns.*" - Do you have any evidence for this claim? – Turing85 Mar 24 '21 at 20:52
  • 1
    @mmm: yes, and? As I just said, `System.currentTimeMillis()` is exactly as accurate as your system clock is, because it uses your system clock. Its error margin is the same as the error margin of your system clock. – Louis Wasserman Mar 24 '21 at 20:52
  • @Turing85 only observed. Difficult to discuss in this format. I have evidence, data output of 100 000 measurements. I can share it. But not sure you could make sense of it right now. But can we expect the ms switches to be right on the 1 000 000 ns? – mjs Mar 24 '21 at 20:54
  • 2
    From the [`System.currentTimeMillis()` documentation](https://docs.oracle.com/en/java/javase/16/docs/api/java.base/java/lang/System.html#currentTimeMillis()): "*[...] Note that while the unit of time of the return value is a millisecond, **the granularity of the value depends on the underlying operating system** and may be larger. [...]*" Or in other words: you are at the mercy of the OS. – Turing85 Mar 24 '21 at 20:55
  • @LouisWasserman yes, that link contained some good info, thanks, but I am not sure it is exactly the same question. – mjs Mar 24 '21 at 20:55
  • What precision do you need? – akuzminykh Mar 24 '21 at 20:56
  • I am just trying to document that the observed difference is not the fault of our algorithm but is due to System.currentTimeMillis() not being linear enough. I am comparing it with System.nanoTime() right now. – mjs Mar 24 '21 at 20:58
  • 1
    The difference is that `System.currentTimeMillis()` relies on the OS, `System.nanoTime()` does not (necessarily). – Turing85 Mar 24 '21 at 20:59
  • @Turing85 From your link, it says it counts in "units of tens of milliseconds". Does that mean 1/10 of a millisecond? 100 000 ns? An error size of up to 100 000 ns? Seems like a lot. I am on Mac and/or Linux. For the document's sake, how accurate are they? – mjs Mar 24 '21 at 21:03
  • That means an error size of up to 1 000 000 ns * 0.1 = `100 000 ns`, which means a millisecond might be reported up to 0.1 milliseconds earlier or later. Would this be correct? – mjs Mar 24 '21 at 21:04
  • Mine is down to 30 ns, but now you showed System.nanoTime(). The main question is regarding System.currentTimeMillis(). If you can run your operations within 100 ns, can you expect every new ms to be within that range too? Do you understand what I mean? – mjs Mar 24 '21 at 21:23
  • @mmm yeah, I just noticed that. Working on an update... (last comment will be deleted). – Turing85 Mar 24 '21 at 21:25
  • For your info, if you store System.currentTimeMillis() results in an array like you did and I did, you will need to generate at least 10 000 samples to see a difference at a 100 ns operation cost. Best to loop it. To see the error grow, you would likely need to generate 10 million; I can see very large errors with that set, 1 million not so much. You would then need code to detect the error sizes. – mjs Mar 24 '21 at 21:26
  • @mmm The previous experiment was designed to have the minimal count of instructions between `System.nanoTime()`-calls, hence the initialization in an array. I corrected the code to use `System.currentTimeMillis()`. The sample can be found [in this ideone demo](https://ideone.com/ADD84d). From what I can see, the precision seems to be 1 ms (at least, most deltas are 1 ms). What is funny, however, is the number of iterations per 1 ms tick. – Turing85 Mar 24 '21 at 22:21

1 Answer


From the documentation:

"Note that while the unit of time of the return value is a millisecond, the granularity of the value depends on the underlying operating system and may be larger. For example, many operating systems measure time in units of tens of milliseconds.

See the description of the class Date for a discussion of slight discrepancies that may arise between "computer time" and coordinated universal time (UTC)."

So both the precision and the accuracy of the call are undefined. They pass the buck to the OS and shrug. I doubt that 250 ns is an accurate measure of its quality; the gap is likely much larger than that. "Tens of milliseconds", as per the documentation, is a much more likely value, especially across multiple systems.

They also essentially disavow any close tracking of UTC. "Slight discrepancies" are allowed, whatever that means; technically this allows any value at all, because what exactly is "slight"? It could be a second or a minute depending on your point of view.

Finally the system clock could be misconfigured by the person operating the system, and at that point everything goes out the window.
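
If you want to see what granularity you actually get on a particular machine, a rough sketch like the following can help; the names are purely illustrative. It counts the step sizes at which `System.currentTimeMillis()` is seen to change:

```java
import java.util.Map;
import java.util.TreeMap;

// Observes the visible "tick" sizes of System.currentTimeMillis() by sampling
// it in a tight loop and tallying the deltas between consecutive changes.
public class MillisGranularity {
    public static void main(String[] args) {
        Map<Long, Integer> deltaCounts = new TreeMap<>();

        long previous = System.currentTimeMillis();
        int observedChanges = 0;
        while (observedChanges < 2_000) {
            long now = System.currentTimeMillis();
            if (now != previous) {
                long delta = now - previous;     // size of the visible step, in ms
                deltaCounts.merge(delta, 1, Integer::sum);
                previous = now;
                observedChanges++;
            }
        }

        deltaCounts.forEach((delta, count) ->
                System.out.println(delta + " ms step seen " + count + " times"));
    }
}
```

On many systems most steps come out as 1 ms; on a machine with a coarser clock you may see steps of 10 ms or more, which is exactly the "granularity of the value depends on the underlying operating system" caveat from the documentation.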

markspace
  • Yes, I've actually in some cases observed diffs very, very close to 99 9888... but that was when I generated 10 million data points. I don't see that much otherwise. I guess it is true then. – mjs Mar 24 '21 at 21:11
  • 1
    I also believe the JVM will cache values for `systemTimeMillis()` and return the same value multiple times, until it's time to update that value on a timer. This is done for speed, but it's not very accurate. Don't expect `systemTimeMillis()` to increment smoothly, it's liable to make sudden large jumps. – markspace Mar 24 '21 at 21:15
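
For completeness, here is a sketch one could use to watch for the kind of sudden jumps described in the comment above, by comparing elapsed wall-clock time against elapsed `System.nanoTime()`; it is only an illustration, not markspace's code:

```java
// Compares elapsed System.currentTimeMillis() against elapsed System.nanoTime()
// and prints their disagreement ("drift") every 100 ms.
public class WallClockJumpWatch {
    public static void main(String[] args) throws InterruptedException {
        long startMs = System.currentTimeMillis();
        long startNs = System.nanoTime();

        for (int i = 0; i < 100; i++) {
            Thread.sleep(100);
            long elapsedWallMs = System.currentTimeMillis() - startMs;
            long elapsedMonoMs = (System.nanoTime() - startNs) / 1_000_000;
            long drift = elapsedWallMs - elapsedMonoMs;   // disagreement in ms
            System.out.println("after ~" + elapsedMonoMs + " ms, drift = " + drift + " ms");
        }
    }
}
```

A drift value that changes abruptly between iterations suggests the wall clock was stepped (for example by NTP), rather than merely ticking coarsely.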