
I'm looking for a cross-platform clock with high resolution, high precision, and relatively low performance impact (in order of importance).

I've tried:

#include <boost/chrono.hpp>
#include <ctime>
#include <string>

//using namespace std::chrono;
//typedef std::chrono::high_resolution_clock Clock;

using namespace boost::chrono;
typedef boost::chrono::high_resolution_clock Clock;
auto now = Clock::now().time_since_epoch();
std::size_t secs = duration_cast<seconds>(now).count();
std::size_t nanos = duration_cast<nanoseconds>(now).count() % 1000000000;
std::time_t tp = (std::time_t) secs;
std::string mode;
char timestamp[] = "yyyymmdd HH:MM:SS";
char format[] = "%Y%m%d %H:%M:%S";

strftime(timestamp, sizeof(timestamp), format, std::localtime(&tp)); // Takes 12 microseconds
// Note: nanos is not zero-padded here; pad to 9 digits for a fixed-width timestamp.
std::string output = std::string(timestamp) + "." + std::to_string(nanos);

After some trials and testing: the std::chrono::high_resolution_clock in VS2013 is a typedef for system_clock and has a precision of roughly 1 millisecond. The boost::chrono::high_resolution_clock uses QueryPerformanceCounter on Windows and has high resolution and precision. Unfortunately, Clock::now() returns the time since boot, so now().time_since_epoch() does not return time since the epoch (it also returns time since boot).

I don't mind using preprocessor guards for different solutions on different platforms (I want this to work on VS2013 and Linux). I will likely store the now() result and do the formatting in a separate, low-priority thread.

Does a cross-platform, high-resolution, high-precision, performance-friendly timer exist?

Is boost::chrono::high_resolution_clock::now().time_since_epoch() working as intended? It does not give the time since the epoch; it only gives the time since the last boot. Is there a way to convert this now() into seconds since the epoch?

user2411693
  • Added question. Looking at the Boost docs, it seems that the epoch can be arbitrarily defined. But then there is no clock-independent way of converting a time_since_epoch to Unix epoch time (the commonly referred-to 1970 epoch). – user2411693 May 14 '15 at 13:47
  • Does this SO question and answers help? [C++ Cross-Platform High-Resolution Timer](https://stackoverflow.com/questions/1487695/c-cross-platform-high-resolution-timer)? – Neitsa May 14 '15 at 13:48
  • That post solves the issue of getting the time between two points (timers). I'm comfortable subtracting any Clock to find elapsed time. What I want is the ability to create timestamps that are yyyymmdd HH:MM:SS.NNNNNNNNN where the nanoseconds portion is internally consistent (which a steady_clock/QPC is sufficient for) and the time is a correct system time. – user2411693 May 14 '15 at 13:56
  • This is helpful: http://stackoverflow.com/questions/26128035/c11-how-to-print-out-high-resolution-clock-time-point?rq=1. But boost does not implement to_time_t, and std::chrono::high_resolution_clock has not really been implemented in VS2013. If there were a way to create a time_point of an arbitrary clock at the Unix epoch, then I'd have what I need. – user2411693 May 14 '15 at 14:11
  • Only `system_clock` is guaranteed to have any relationship to the civil calendar. And all known implementations of it simply track Unix Time: http://en.wikipedia.org/wiki/Unix_time . You can build your own custom clock types, but that still leaves you with the problem of implementing `now` in a way that meets both your epoch and precision requirements. – Howard Hinnant May 14 '15 at 14:13
  • I suppose I could take the difference between system_clock and high_resolution_clock, then save this offset and use it to relate high_resolution_clock::now() to epoch time (which should be fine as long as the two clocks don't drift significantly); a sketch of this offset approach follows these comments. I'm surprised that this isn't a more common issue. – user2411693 May 14 '15 at 14:21
  • Note that there's a difference between a "timestamp" and "clock time". [This reference](https://msdn.microsoft.com/en-us/library/windows/desktop/dn553408(v=vs.85).aspx) gives more explanation regarding QPC. Needing a clock time accurate to micro/nano seconds is not a common requirement and may be more complex than you think. – uesp May 14 '15 at 14:32
  • Just so we're 100% clear: Your question title says "VS2013" and the very first line in your question says "cross platform". Which do you need? – Mark B May 14 '15 at 14:45
  • @uesp, thanks for the link. I see the challenge is converting a difference clock to an absolute clock. I think this should be possible given that the clock does not have to be accurate (relative to an external time source); it just needs to be internally consistent with sub-microsecond resolution. – user2411693 May 14 '15 at 15:12
  • @MarkB, I believe using std::chrono::high_resolution_clock (now and to_time_t methods) will give me an acceptable solution on Linux. Because VS2013 does not implement a high_resolution_clock (it typedefs to system_clock and has 1 millisecond precision), I need an alternative solution for VS2013. – user2411693 May 14 '15 at 15:13
  • [Here](https://github.com/WebKit/webkit/blob/master/Source/WTF/wtf/CurrentTime.cpp#L153) is how they do this in WebKit (BSD licensed code I believe). – dewaffled May 14 '15 at 15:24
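
An illustrative sketch of the offset approach mentioned in the comment above (the helper names are hypothetical, not from the post; it assumes system_clock tracks Unix time and that the two clocks do not drift significantly between calibrations):

#include <chrono>

using hires_clock = std::chrono::high_resolution_clock;   // or a QPC-based clock
using sys_clock   = std::chrono::system_clock;
using ns          = std::chrono::nanoseconds;

// Calibrate once: "system time now" minus "high-resolution time now".
inline ns hires_to_epoch_offset()
{
    auto sys_now   = std::chrono::duration_cast<ns>(sys_clock::now().time_since_epoch());
    auto hires_now = std::chrono::duration_cast<ns>(hires_clock::now().time_since_epoch());
    return sys_now - hires_now;
}

// Later, convert any high-resolution reading to nanoseconds since the Unix epoch.
inline ns to_epoch_ns(hires_clock::time_point tp, ns offset)
{
    return std::chrono::duration_cast<ns>(tp.time_since_epoch()) + offset;
}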

1 Answer


I think the nicest way to do it is to implement a new clock type that models the Clock requirement in the C++11/14 standard.

The Windows function GetSystemTimePreciseAsFileTime can be used as the basis of the Windows clock. I believe this function returns the time in units of 100 nanoseconds since the start of the Windows FILETIME epoch (1 January 1601 UTC). If I'm wrong about that, just alter the definition of period to suit.

#include <windows.h>   // FILETIME, ULARGE_INTEGER, GetSystemTimePreciseAsFileTime
#include <chrono>
#include <ratio>

struct windows_highres_clock
{
    // implement Clock concept
    using rep = ULONGLONG;
    using period = std::ratio<1, 10000000>;   // 100-nanosecond ticks
    using duration = std::chrono::duration<rep, period>;
    using time_point = std::chrono::time_point<windows_highres_clock, duration>;

    static constexpr bool is_steady = true;

    static time_point now() noexcept {
        FILETIME ft = { 0, 0 };
        GetSystemTimePreciseAsFileTime(&ft);
        // Combine the two 32-bit halves into a 64-bit tick count
        ULARGE_INTEGER stamp { { ft.dwLowDateTime, ft.dwHighDateTime } };
        return time_point { duration { stamp.QuadPart } };
    }
};

If you want to go ahead and implement the TrivialClock concept on top of it, that should work. Just follow the instructions at http://cppreference.com.

Providing from_time_t and to_time_t static member functions will complete the picture and allow you to use this clock for both timing and date/time representation.
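
A minimal sketch of those two members, to be added inside windows_highres_clock (my assumption: the clock's epoch is the FILETIME epoch, 1 January 1601 UTC; 11644473600 is the number of seconds between that date and the Unix epoch; requires <ctime> for std::time_t):

static std::time_t to_time_t(const time_point& tp) noexcept
{
    using std::chrono::duration_cast;
    using std::chrono::seconds;
    // 100-ns ticks since 1601 -> whole seconds since 1601 -> seconds since 1970
    auto secs_since_1601 = duration_cast<seconds>(tp.time_since_epoch());
    return static_cast<std::time_t>(secs_since_1601.count() - 11644473600LL);
}

static time_point from_time_t(std::time_t t) noexcept
{
    using std::chrono::duration_cast;
    using std::chrono::seconds;
    // seconds since 1970 -> seconds since 1601 -> 100-ns ticks since 1601
    return time_point{ duration_cast<duration>(seconds{ t + 11644473600LL }) };
}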

example of use:

#include <iostream>
#include <thread>

windows_highres_clock clock;
auto t0 = clock.now();
std::this_thread::sleep_for(std::chrono::seconds(1));
auto t1 = clock.now();
auto diff = t1 - t0;
auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(diff);
std::cout << "waited for " << ms.count() << " milliseconds\n";

example output:

waited for 1005 milliseconds

For non-Windows systems std::chrono::system_clock usually suffices, but you can write a similar per-platform clock using the appropriate native timing mechanism.
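
For example, a rough Linux/POSIX counterpart might be built on clock_gettime(CLOCK_REALTIME, ...), which reports nanoseconds since the Unix epoch on typical systems (this sketch and the name posix_highres_clock are my own, not from the original post):

#include <chrono>
#include <cstdint>
#include <time.h>   // clock_gettime, CLOCK_REALTIME, timespec (POSIX)

struct posix_highres_clock
{
    using rep        = std::int64_t;
    using period     = std::nano;                        // 1-ns ticks
    using duration   = std::chrono::duration<rep, period>;
    using time_point = std::chrono::time_point<posix_highres_clock, duration>;

    static constexpr bool is_steady = false;             // wall clock: can be adjusted

    static time_point now() noexcept {
        timespec ts{};
        clock_gettime(CLOCK_REALTIME, &ts);
        return time_point{ duration{
            static_cast<rep>(ts.tv_sec) * 1000000000LL + ts.tv_nsec } };
    }
};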

FYI:

Here's a portable piece of code you can use to check clock resolutions. The class any_clock is a polymorphic container that can hold any Clock-like object. It always returns its time stamps as microseconds since the epoch.

#include <chrono>
#include <iostream>
#include <memory>
#include <thread>
#include <type_traits>

// create some syntax to help specialise the any_clock
template<class Clock> struct of_type {};

// polymorphic clock container
struct any_clock
{

    template<class Clock>
    any_clock(of_type<Clock>)
    : _ptr { new model<Clock> {} }
    {}

    std::chrono::microseconds now() const {
        return _ptr->now();
    }

    using duration = std::chrono::microseconds;

private:
    // note: 'concept' became a keyword in C++20; rename if building with a newer standard
    struct concept {
        virtual ~concept() = default;
        virtual duration now() const noexcept = 0;
    };
    template<class Clock>
    struct model final : concept {
        duration now() const noexcept final {
            return std::chrono::duration_cast<std::chrono::microseconds>(Clock::now().time_since_epoch());
        }
    };
    std::unique_ptr<concept> _ptr;
};

int main(int argc, const char * argv[])
{
    any_clock clocks[] = {
         { of_type<windows_highres_clock>() },
         { of_type<std::chrono::high_resolution_clock>() },
         { of_type<std::chrono::system_clock>() }
    };

    static constexpr size_t nof_clocks = std::extent<decltype(clocks)>::value;
    any_clock::duration t0[nof_clocks];
    any_clock::duration t1[nof_clocks];

    for (size_t i = 0 ; i < nof_clocks ; ++i) {
        t0[i] = clocks[i].now();
    }
    std::this_thread::sleep_for(std::chrono::seconds(1));
    for (size_t i = 0 ; i < nof_clocks ; ++i) {
        t1[i] = clocks[i].now();
    }
    for (size_t i = 0 ; i < nof_clocks ; ++i) {
        auto diff = t1[i] - t0[i];
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(diff);
        std::cout << "waited for " << us.count() << " microseconds\n";
    }
    return 0;
}
Richard Hodges