3

I have been using the following clock definition for a frame timer for years now:

    using frame_clock = std::conditional_t<
        std::chrono::high_resolution_clock::is_steady,
        std::chrono::high_resolution_clock,
        std::chrono::steady_clock>;

In other words, I want a clock with the highest available resolution, but it must increment monotonically. Note that MSVC currently uses the following alias to define std::chrono::high_resolution_clock:

    using high_resolution_clock = steady_clock;

Therefore, on MSVC, the alias I have defined will just use std::chrono::steady_clock. This is not necessarily true for libstdc++ and libc++, hence the use of the alias.
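
For context, a stripped-down version of the frame loop this alias drives looks roughly like the following (the 60 fps period is just an illustrative value):

    #include <chrono>
    #include <thread>
    #include <type_traits>

    using frame_clock = std::conditional_t<
        std::chrono::high_resolution_clock::is_steady,
        std::chrono::high_resolution_clock,
        std::chrono::steady_clock>;

    constexpr auto frame_period = std::chrono::nanoseconds{1'000'000'000 / 60};

    void run_frames() {
        auto next = frame_clock::now();
        for (;;) {
            // ... per-frame processing ...
            next += frame_period;
            std::this_thread::sleep_until(next);  // relies on monotonic time
        }
    }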

Recently, I stumbled across a footnote here: https://en.cppreference.com/w/cpp/chrono/high_resolution_clock

Notice that cppreference explicitly discourages the use of std::chrono::high_resolution_clock. Their rationale is that the clock varies by implementation... but isn't this true of std::chrono::steady_clock and std::chrono::system_clock as well? For instance, I was unable to find anything guaranteeing that clock periods must be expressed in particular units. In fact, I understand that is by design.

My question is: after having used std::chrono::high_resolution_clock for so many years (for frame timers and benchmarks), should I be more concerned than I am? Even here on this site, I see many recommendations to use std::chrono::high_resolution_clock, despite what this footnote says. Any further insight on this disparity, or examples of where it could cause problems, would be much appreciated.

  • I guess it begs the question: what are you using `frame_clock` for? – YSC Apr 23 '21 at 16:28
  • It is a frame timer. So it's used in applications where processing must be divided into frames. For instance, embedded realtime applications where all processing must fit into a 3600 fps frame, or video games, which are more forgiving at 60 fps. I also often use `std::chrono::high_resolution_clock` for benchmarks, though I care less whether it's steady in those applications, since I can just run the benchmarks over and over. – Christopher Mauer Apr 23 '21 at 16:36
  • And on those dedicated systems, how often does the system clock change? Do you handle UTC leap seconds? Do you have DST enabled? – YSC Apr 23 '21 at 18:06
  • @YSC Sometimes I have seen implementations where the processor scalar or other frequency-controlling parameters change to help support specific baud rates or ODRs. We don't have GHz clock speeds to minimize the errors, so errors become significant, and we are in safety-critical applications. When we mess with this, we of course have to update our system clock to account for differences in processor frequency and keep our frame time constant. We do take leap seconds into account for GNSS applications. – Christopher Mauer Apr 23 '21 at 18:30
  • 1
    That might be bad then, yes. Plus, the type of bugs these changes can lead to are particularly difficult to track down, which doesn't help. – YSC Apr 23 '21 at 18:38

3 Answers

4

For practical purposes, you only have 3 choices:

  • For taking real time, your only choice is std::system_clock (if you want to stay inside C++; OS-level routines do exist)
  • For measuring intervals, if you want to stay within C++, you have to use std::steady_clock. There is no implementation out there with a steady clock of higher resolution than std::steady_clock
  • A viable alternative to the above, if you are willing to sacrifice C++ conformance, is to use TSC counters directly; a sketch follows this list. This is the highest resolution one can ever get, and it is also the fastest to use. The downside is that if you want to measure units of time rather than cycles, you have to convert cycles to seconds using the CPU cycle rate.
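
A minimal sketch of the TSC approach on x86 (the __rdtsc intrinsic is compiler-specific, and the cycle rate below is a hypothetical placeholder you would have to calibrate for your CPU):

    #include <cstdint>
    #if defined(_MSC_VER)
    #include <intrin.h>
    #else
    #include <x86intrin.h>
    #endif

    // Read the CPU's time-stamp counter (x86-specific, outside standard C++).
    inline std::uint64_t read_tsc() { return __rdtsc(); }

    // Converting cycles to seconds requires the TSC frequency; this value is
    // a placeholder, to be calibrated against a known clock on your hardware.
    constexpr double tsc_hz = 3.0e9;  // hypothetical 3 GHz invariant TSC

    inline double cycles_to_seconds(std::uint64_t cycles) {
        return static_cast<double>(cycles) / tsc_hz;
    }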
SergeyA
  • Generally, unless the platform is particularly kind to me, `system_clock` is... well, in embedded applications, it's often something I have to define myself. We are often defining frames similarly to what an OS would provide, so we often cannot rely on the OS to provide us this information. In theory though, I think I agree with you. Actually, your last point is a great one for RTOS applications. Would it follow that you would implement `system_clock` using CPU cycle counters and the current frequency? – Christopher Mauer Apr 23 '21 at 16:47
  • @ChristopherMauer To be honest, I do not have that much experience with embedded. You certainly should be able to use the TSC as the time source for a system clock, but I guess the question would be how you set the initial time when the chip is powered on? – SergeyA Apr 23 '21 at 16:49
  • 1
    Here's an example of how to integrate a TSC instruction into chrono on x86: https://stackoverflow.com/a/11485388/576911 – Howard Hinnant Apr 23 '21 at 16:53
  • @HowardHinnant yeah, this is pretty trivial on modern x86. – SergeyA Apr 23 '21 at 16:55
  • @SergeyA Embarrassingly, we usually just set system time to 0 on power up when we don't have network access. We could save the last time in NVM though, and use system time as a... total elapsed runtime kind of thing. I digress. – Christopher Mauer Apr 23 '21 at 16:58
  • @HowardHinnant Thanks. I don't work on x86 much... usually ARM and microcontrollers. That said, I found your link pretty useful. The general approach is roughly the same. – Christopher Mauer Apr 23 '21 at 17:00
  • 1
    @ChristopherMauer just make sure that on ARM the TSCs are monotonic and synchronized across multiple CPUs. That was not always the case with x86, and I am not familiar with ARM. – SergeyA Apr 23 '21 at 17:03
  • @SergeyA Lots of good answers, but I felt yours gave me the best suggestions for a path forward. – Christopher Mauer Apr 23 '21 at 17:14
2

What you've read is essentially the advice I have been giving for the past handful of years.

It isn't that high_resolution_clock is dangerous. It is just that it is rather useless. This is because it is always aliased to either system_clock or steady_clock. And so you might as well choose system_clock or steady_clock so that you know which one you're getting.

steady_clock always has is_steady == true. That's a requirement. Additionally, system_clock never has is_steady == true. That isn't actually a requirement, but unless your computer has a clock that keeps perfect time, it will need adjusting occasionally to set it to the correct time. And a clock that can be adjusted must have is_steady == false.
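
Both properties are visible at compile time; a minimal check (the second assertion reflects current practice on libstdc++, libc++, and MSVC, not a standard requirement):

    #include <chrono>

    // Required by the standard: steady_clock is monotonic.
    static_assert(std::chrono::steady_clock::is_steady);

    // Not required by the standard, but true on mainstream implementations,
    // because the system clock can be adjusted:
    static_assert(!std::chrono::system_clock::is_steady);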

Your frame_clock alias is just a fancy way of saying:

    using frame_clock = std::chrono::steady_clock;
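
This equivalence can be checked directly; the assertion below holds on libstdc++, libc++, and MSVC today, though the standard does not guarantee it:

    #include <chrono>
    #include <type_traits>

    using frame_clock = std::conditional_t<
        std::chrono::high_resolution_clock::is_steady,
        std::chrono::high_resolution_clock,
        std::chrono::steady_clock>;

    // On all three mainstream implementations the conditional collapses
    // to steady_clock (not a portability guarantee):
    static_assert(std::is_same_v<frame_clock, std::chrono::steady_clock>);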
Howard Hinnant
  • Ninja'd. I was literally pointing OP to your past answers anyway. – Casey Apr 23 '21 at 16:40
  • If you want to undelete your post, I'll upvote it. :-) – Howard Hinnant Apr 23 '21 at 16:40
  • For posterity, sure. :) – Casey Apr 23 '21 at 16:41
  • This makes sense. It does beg the question of what happens with the addition of the various C++20 clocks. If, say, `gps_clock` has a higher resolution than `steady_clock`, then it should follow that `high_resolution_clock` should be aliased as `gps_clock`. In practice, this makes little sense given that gps updates at 1 Hz. But the interesting part is that `gps_clock` makes no guarantees that the implementation is steady or not. In which case, the alias could theoretically evaluate to something other than what you've suggested... albeit overly pedantic. – Christopher Mauer Apr 23 '21 at 16:54
  • 1
    @ChristopherMauer The proliferation of clocks in C++ puzzles me. I can see myself as a customer of only two clocks: the highest-resolution clock which reflects real time, and the highest-resolution clock which is steady and monotonic. I do not see why I need anything more than those two. – SergeyA Apr 23 '21 at 17:01
  • @SergeyA I somewhat agree. GPS time is odd, because it has this thing called "leap seconds". The conversion they added between UTC time and GPS time was highly desirable for navigation analysts such as myself. Time bases and conversions between them are among the most challenging things we deal with. I cannot imagine they would be at all useful outside of my field, though. – Christopher Mauer Apr 23 '21 at 17:05
  • On a typical consumer computer, `gps_clock` will be implemented in terms of `system_clock`, and thus not be steady. However the spec allows `gps_clock` to be reading from a GPS receiver, and so could possibly be steady. Ditto for the other clocks. – Howard Hinnant Apr 23 '21 at 17:05
  • @HowardHinnant I would almost expect `gps_clock` to be unsteady all the time. A receiver usually outputs time of week in milliseconds, but given that `gps_clock` is based on the gps epoch, I'd expect leap seconds to cause the clock to become unsteady over that long period. However, if this were the case, why doesn't the standard specify that `gps_clock` is unsteady? – Christopher Mauer Apr 23 '21 at 17:09
  • 2
    GPS time physically measures leap seconds, but doesn't count them the same way as UTC does. Instead of marking a 61st second in the minute, GPS time just rolls into the next minute (maintaining steadiness). Thus its "human calendar" gets ahead of UTC by a second during a leap second. For example, here are the current UTC and GPS times for comparison: http://leapsecond.com/java/gpsclock.htm (a clock_cast sketch of this offset follows the thread) – Howard Hinnant Apr 23 '21 at 17:12
  • Ah, that makes sense. I am having Friday brain today. Thanks. Great discussion! – Christopher Mauer Apr 23 '21 at 17:18
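
A quick sketch of observing that offset with C++20's clock_cast (this assumes a standard library with complete C++20 chrono support):

    #include <chrono>
    #include <iostream>

    int main() {
        using namespace std::chrono;
        const auto utc = utc_clock::now();
        const auto gps = clock_cast<gps_clock>(utc);
        // GPS runs ahead of UTC by the leap seconds accumulated since the
        // GPS epoch (1980-01-06).
        std::cout << "UTC: " << utc << '\n'
                  << "GPS: " << gps << '\n';
    }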
2

Yes, you should be concerned. high_resolution_clock is implementation-defined. Don't let the implementation pick; just use steady_clock directly.

Howard Hinnant wrote a great comparison between steady and system clocks and wishes he had never added high_resolution_clock in the first place.

As before, stick to using std::chrono::steady_clock directly instead of letting the implementation pick.
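
For instance, a basic interval measurement then looks like this (the timed work is a placeholder):

    #include <chrono>
    #include <iostream>

    int main() {
        const auto start = std::chrono::steady_clock::now();
        // ... code being measured goes here ...
        const auto stop = std::chrono::steady_clock::now();
        const std::chrono::duration<double, std::milli> elapsed = stop - start;
        std::cout << "elapsed: " << elapsed.count() << " ms\n";
    }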

Casey