
This is not a duplicate question. If in doubt, read it to the end.

Problem: I am creating a web page that is shown in a SmartTV browser, and I can't rely on or trust this device's clock. The time is always wrong and the device does not keep the date.

Solution: I use AJAX to get my location's current date and time from a time-server API. It sends back the year, month, weekday, day of the month, hour and minutes. I don't care about the exact seconds.

How to: What I don't know is how to take this information and turn it into a working clock that keeps updating in the browser as the time and date advance. I fetch the time server every hour to make sure it stays in sync, but between those fetches, how can I make the JavaScript count the minutes, hours, days, months and years correctly?

NaN
  • So, you're saying you don't know how to do simple math? I must be failing to understand the question? There are 60 seconds in a minute, 60 minutes in an hour. What else do you need to know, how to do JavaScript? This would have to do with what your Client is seeing, which should be that time you got via AJAX. Parse it out and do the math. – StackSlave Aug 06 '16 at 00:57
  • If I were sure what to do, would I make that question? I did not realize that SO was now to be used by "I already know everything" people. – NaN Aug 07 '16 at 12:22

1 Answer


Option 1: use setInterval(). For example:

var numElapsedSeconds = 0;
setInterval(function() {
    numElapsedSeconds++; // count the seconds since the last server sync
}, 1000);

Then add numElapsedSeconds to the last timestamp you received from the server and you have a (more or less) reliable current time.
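Something like this, for instance (just a sketch; `lastServerTime` stands for whatever `Date` you build from the year/month/day/hour/minute fields of the API response, and the `clock` element id is only an example):

// lastServerTime: a Date built from the fields returned by your time API.
var lastServerTime = new Date(2016, 7, 6, 1, 0, 0); // placeholder value
var numElapsedSeconds = 0;

setInterval(function() {
    numElapsedSeconds++;
    // Estimated current time = last server timestamp + counted seconds.
    var estimated = new Date(lastServerTime.getTime() + numElapsedSeconds * 1000);
    document.getElementById('clock').textContent = estimated.toLocaleString();
}, 1000);

When the hourly AJAX update arrives, replace `lastServerTime` with the new value and reset `numElapsedSeconds` to 0.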

Option 2: when you get an update from the server, also check the local clock, figure out the delta, and from that point on use the delta with the local clock to get the "real" time. Even if the local time settings are wrong, I'm assuming that time still progresses at the same speed as on the server (maybe unless the user tweaks it all of a sudden). Not sure what your use-case is but maybe this could be reliable enough...
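A rough sketch of that idea (again, `serverTime` is whatever `Date` you parse from the API response):

var clockDelta = 0; // server time minus local time, in milliseconds

// Call this whenever the hourly AJAX response arrives.
function onServerTimeReceived(serverTime) {
    clockDelta = serverTime.getTime() - Date.now();
}

// The "real" time is the (possibly wrong) local clock shifted by the delta.
function getCurrentTime() {
    return new Date(Date.now() + clockDelta);
}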

You could also combine the two options for maximum accuracy (to compensate for the interval not being called exactly every 1000 milliseconds).
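For example (a sketch that builds on `clockDelta` from Option 2): let the local clock drive the time through the delta, and use the interval only to detect sudden jumps of the local clock:

var estimatedTime = Date.now() + clockDelta; // clockDelta as in Option 2
var lastTick = Date.now();

setInterval(function() {
    var now = Date.now();
    var step = now - lastTick;
    lastTick = now;
    // If the local clock jumped (e.g. the user changed it), fall back to counting ~1 second.
    if (step < 0 || step > 5000) {
        step = 1000;
    }
    estimatedTime += step;
}, 1000);

function getRobustTime() {
    return new Date(estimatedTime);
}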

obe
  • I don't think this will be reliable. In only 10 seconds it was already off by a non trivial amount: 14.174 - 23.227 – bryanmac Aug 06 '16 at 01:15
  • @bryanmac What's "14.174 10 23.227"? Also, I did suggest to use a combination of both options. Combining the two should make it very reliable (almost as reliable as the internal clock, with two "glitches" - one in case the user changes the local time (in which case the code would have to temporarily fall back on the interval), and two - due to the latency of getting the time from the server (which can also be partly mitigated by various methods...)) – obe Aug 06 '16 at 01:19
  • It was 10 iterations of setTimeout, logging the current time as second.millisecond. – bryanmac Aug 06 '16 at 01:21
  • http://stackoverflow.com/questions/8173580/setinterval-timing-slowly-drifts-away-from-staying-accurate – bryanmac Aug 06 '16 at 01:21
  • He said he gets it from the server every hour. I imagine it could drift quite a bit in an hour. – bryanmac Aug 06 '16 at 01:22
  • Another option might be to return server time as a response header on every request. Then do a setInterval off of that. Drift would be minimized. You also don't need a separate hourly request to get time. It's piggy-backed via a response header, which is what response headers are for. – bryanmac Aug 06 '16 at 01:24
  • Right, that's why I suggested to also use the local time... (according to a delta that would be calculated when getting the time from the server) - and keep the `setInterval` at the back just to detect when the user changes the local time to prevent strange phenomena... – obe Aug 06 '16 at 01:24
  • OK - I did more tests and I was drifting about 250-300ms every minute or so. I still like response header on every request + timer. :) – bryanmac Aug 06 '16 at 01:30
  • That's assuming the application is making these requests :) but another reason against it (IMO) is that it adds a layer of functionality for the server side for every API call that would be there just to satisfy a pretty localized problem. If the same API serves (or would serve in the future) other clients - the time stamp may be redundant. So IMO from good design perspective, if the localized solutions are satisfactory - it's better to stick to them and not to add this to every API call (for conceptual reasons; even if it's just a small response header...) – obe Aug 06 '16 at 01:41
  • I guess I could set the `setInterval` to run every minute, not show the seconds to the user, and consult the server more often, like every 30 min. Would that minimize the drift? – NaN Aug 07 '16 at 12:20