I'm looking to implement a simple timer mechanism in C++. The code should work in Windows and Linux. The resolution should be as precise as possible (at least millisecond accuracy). This will be used to simply track the passage of time, not to implement any kind of event-driven design. What is the best tool to accomplish this?
- Be more specific. Are you timing a function call, or do you want to receive some kind of signal after a specified period of time? Those are both "simple" timer applications, but they are implemented very differently. Note the use of "simple" in quotes: timing in general-purpose computers is never "simple". – jmucchiello Sep 28 '09 at 15:31
- C version: http://stackoverflow.com/questions/361363/how-to-measure-time-in-milliseconds-using-ansi-c – Ciro Santilli OurBigBook.com Mar 18 '16 at 22:06
14 Answers
Updated answer for an old question:
In C++11 you can portably get to the highest resolution timer with:
#include <iostream>
#include <chrono>
#include "chrono_io"
int main()
{
    typedef std::chrono::high_resolution_clock Clock;
    auto t1 = Clock::now();
    auto t2 = Clock::now();
    std::cout << t2-t1 << '\n';
}
Example output:
74 nanoseconds
"chrono_io" is an extension to ease I/O issues with these new types and is freely available here.
There is also an implementation of `<chrono>` available in Boost (might still be on tip-of-trunk, not sure it has been released).
Update
This is in response to Ben's comment below that subsequent calls to `std::chrono::high_resolution_clock` take several milliseconds in VS11. Below is a `<chrono>`-compatible workaround. However it only works on Intel hardware, you need to dip into inline assembly (the syntax to do that varies with compiler), and you have to hardwire the machine's clock speed into the clock:
#include <chrono>
#include <cassert>
#include <type_traits>
#include <sys/sysctl.h>   // for sysctl, CTL_HW, HW_CPU_FREQ (OS X)

struct clock
{
    typedef unsigned long long                 rep;
    typedef std::ratio<1, 2800000000>          period;  // My machine is 2.8 GHz
    typedef std::chrono::duration<rep, period> duration;
    typedef std::chrono::time_point<clock>     time_point;
    static const bool is_steady =              true;

    static time_point now() noexcept
    {
        unsigned lo, hi;
        asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
        return time_point(duration(static_cast<rep>(hi) << 32 | lo));
    }

private:

    static unsigned get_clock_speed()
    {
        int mib[] = {CTL_HW, HW_CPU_FREQ};
        const std::size_t namelen = sizeof(mib)/sizeof(mib[0]);
        unsigned freq;
        size_t freq_len = sizeof(freq);
        if (sysctl(mib, namelen, &freq, &freq_len, nullptr, 0) != 0)
            return 0;
        return freq;
    }

    static bool check_invariants()
    {
        static_assert(1 == period::num, "period must be 1/freq");
        assert(get_clock_speed() == period::den);
        static_assert(std::is_same<rep, duration::rep>::value,
                      "rep and duration::rep must be the same type");
        static_assert(std::is_same<period, duration::period>::value,
                      "period and duration::period must be the same type");
        static_assert(std::is_same<duration, time_point::duration>::value,
                      "duration and time_point::duration must be the same type");
        return true;
    }

    static const bool invariants;
};

const bool clock::invariants = clock::check_invariants();
So it isn't portable. But if you want to experiment with a high resolution clock on your own intel hardware, it doesn't get finer than this. Though be forewarned, today's clock speeds can dynamically change (they aren't really a compile-time constant). And with a multiprocessor machine you can even get time stamps from different processors. But still, experiments on my hardware work fairly well. If you're stuck with millisecond resolution, this could be a workaround.
This clock has a duration in terms of your cpu's clock speed (as you reported it). I.e. for me this clock ticks once every 1/2,800,000,000 of a second. If you want to, you can convert this to nanoseconds (for example) with:
using std::chrono::nanoseconds;
using std::chrono::duration_cast;
auto t0 = clock::now();
auto t1 = clock::now();
nanoseconds ns = duration_cast<nanoseconds>(t1-t0);
The conversion will truncate fractions of a cpu cycle to form the nanosecond. Other rounding modes are possible, but that's a different topic.
For me this will return a duration as low as 18 clock ticks, which truncates to 6 nanoseconds.
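As a sketch of one of those other rounding modes: if you are on C++17 or later (not available when this answer was written), `std::chrono::round` rounds to the nearest tick instead of truncating toward zero. This assumes the custom rdtsc-based `clock` defined above:

#include <chrono>

// A sketch: assumes the custom rdtsc-based `clock` defined above.
std::chrono::nanoseconds elapsed_rounded()
{
    auto t0 = clock::now();
    auto t1 = clock::now();
    // C++17 std::chrono::round rounds to the nearest nanosecond
    // instead of truncating toward zero.
    return std::chrono::round<std::chrono::nanoseconds>(t1 - t0);
}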
I've added some "invariant checking" to the above clock, the most important of which is checking that the `clock::period` is correct for the machine. Again, this is not portable code, but if you're using this clock, you've already committed to that. The private `get_clock_speed()` function shown here gets the maximum cpu frequency on OS X, and that should be the same number as the constant denominator of `clock::period`.
Adding this will save you a little debugging time when you port this code to your new machine and forget to update the `clock::period` to the speed of your new machine. All of the checking is done either at compile-time or at program startup time. So it won't impact the performance of `clock::now()` in the least.

- In Visual Studio 11, the shortest non-zero interval for `high_resolution_clock` is several milliseconds, unfortunately. – Petter Jan 19 '12 at 22:50
- It took a few seconds for that to sink in for me ... millions of nanoseconds on a platform where the clock speed is a fraction of a nanosecond. Wow!!! I was hoping to see platforms where fractions of a nanosecond would be measurable. I thought my results of several tens of nanoseconds not that impressive. – Howard Hinnant Jan 20 '12 at 03:57
- Yes, I have actually submitted this as a bug for Visual Studio 11. However, since it is not critical, I don't expect it to be fixed in this version. – Petter Jan 20 '12 at 11:01
- @HowardHinnant Maybe off-topic, but are you considering proposing your io extension for standardisation in a future technical report? – authchir Jul 19 '12 at 17:24
- @authchir: Yes, I hope to get it into the mailing this Fall. I've been meaning to do that for a year now. – Howard Hinnant Jul 19 '12 at 23:14
- As a heads up for somebody looking for a solution: I implemented this in VS with __rdtsc() and the method showed significant deviation that was beyond the visible deviation of my cpu clock frequency. This is probably not the way to go. – Mikhail Jan 21 '13 at 10:59
- @Mikhail on an i5-3570K @ 3.4 GHz in a tight loop I see times from 6-17 nanoseconds using a clock that uses __rdtsc() in VS2012, where high_resolution_clock is giving me very very close to 1 millisecond between `now` calls. – David Feb 17 '13 at 23:40
- Is anyone aware of a way to get cpu frequency at compile time? Also... can't cpu frequency vary at run time these days, with turbo modes and whatnot? Perhaps that invalidates this approach as viable? I do need a decent timer in VS11 though, ugh. – David Feb 18 '13 at 00:01
- @Dave: Yes, cpu frequency can vary dynamically (I stated this in the answer). My experiments when using this are typically a tight loop around something I'm trying to measure. Such a tight loop, at least for my platform, usually boosts the cpu frequency to its maximum, and that maximum is typically a compile-time constant (read off of the cpu specification). So for that kind of benchmarking, this can be a valid technique. But obviously this isn't something for general purpose use. It isn't something I'd recommend shipping. Only something for investigation purposes. – Howard Hinnant Feb 18 '13 at 01:52
- Ah okay. Thanks. I ended up making a new clock which uses QueryPerformanceCounter and QueryPerformanceFrequency for VS11 - from my testing I can get values from 600-1200 nanoseconds in a tight loop. It's acceptable (unlike 1ms) for my use, but 6ns would be better. Oh well. – David Feb 18 '13 at 03:25
- I've tried to use `nanoseconds ns = duration_cast<nanoseconds>(diff);` but on Windows I receive either `0` or `1000000`, so it seems the actual precision is still `1 millisecond`. – Oleg Vazhnev Apr 30 '13 at 10:15
- @James: I don't know. I'm a `<chrono>` expert, but not an x86 expert. I would be grateful to you for sharing your knowledge of why rdtscp would be better, and the correct syntax for using it. – Howard Hinnant Sep 14 '13 at 03:21
- You cannot cout `std::cout << t2-t1` directly! http://stackoverflow.com/a/13824716/496223 – dynamic Nov 26 '13 at 10:44
- @link: Read my answer again and note the use of (and link to) "chrono_io". – Howard Hinnant Nov 26 '13 at 15:36
- @HowardHinnant According to http://en.wikipedia.org/wiki/Time_Stamp_Counter, RDTSC can be reordered in the CPU, while RDTSCP cannot. – Mooing Duck Jan 31 '14 at 04:15
- Also, rdtsc is neither monotonic nor steady. Clocks on the CPUs are not guaranteed to be synchronized, and if the system hibernates, then it's reset to zero. – Mooing Duck Jan 31 '14 at 17:30
- @HowardHinnant why are you using a struct? Seems like that would simply confuse people, as structs can't contain methods (in C). – vallentin May 05 '16 at 20:35
- @Vallentin: No particular reason. If you find this code useful, feel free to make it a `class` and add `public:`. – Howard Hinnant May 05 '16 at 21:17
- I'm getting 600-1200 nanoseconds on Windows using VS2017, and it appears to be using the high performance timer. So it seems that this issue of 1ms resolution is no longer a problem. – Programmdude Mar 26 '17 at 11:47
For C++03:
Boost.Timer might work, but it depends on the C function `clock` and so may not have good enough resolution for you.
Boost.Date_Time includes a `ptime` class that's been recommended on Stack Overflow before. See its docs on `microsec_clock::local_time` and `microsec_clock::universal_time`, but note its caveat that "Win32 systems often do not achieve microsecond resolution via this API."
STLsoft provides, among other things, thin cross-platform (Windows and Linux/Unix) C++ wrappers around OS-specific APIs. Its performance library has several classes that would do what you need. (To make it cross-platform, pick a class like `performance_counter` that exists in both the `winstl` and `unixstl` namespaces, then use whichever namespace matches your platform.)
For C++11 and above:
The `std::chrono` library has this functionality built in. See this answer by @HowardHinnant for details.
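For instance, a minimal C++11 stopwatch can be built directly on `std::chrono::steady_clock` (a sketch; steady_clock is usually the right choice for measuring elapsed time because it never jumps backwards):

#include <chrono>
#include <iostream>

int main()
{
    auto start = std::chrono::steady_clock::now();
    // ... code being timed ...
    auto stop = std::chrono::steady_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start);
    std::cout << "elapsed: " << ms.count() << " ms\n";
}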

- Since this is a famous question/answer, an update could be great. Specifically, could this be achieved in a standard and portable way using modern C++ features, like `<chrono>`? If possible, how? – Manu343726 May 01 '14 at 18:59
Matthew Wilson's STLSoft libraries provide several timer types with congruent interfaces, so you can plug and play. Amongst the offerings are timers that are low-cost but low-resolution, and ones that are high-resolution but have a high cost. There are also ones for measuring per-thread times and for measuring per-process times, as well as ones that measure elapsed times.
There's an exhaustive article covering it in Dr. Dobb's from some years ago, although it only covers the Windows ones, those defined in the WinSTL sub-project. STLSoft also provides for UNIX timers in the UNIXSTL sub-project, and you can use the "PlatformSTL" one, which includes the UNIX or Windows one as appropriate, as in:
#include <platformstl/performance/performance_counter.hpp>
#include <iostream>
int main()
{
    platformstl::performance_counter c;

    c.start();
    for(int i = 0; i < 1000000000; ++i);  // empty busy-loop, just something to time
    c.stop();

    std::cout << "time (s): " << c.get_seconds() << std::endl;
    std::cout << "time (ms): " << c.get_milliseconds() << std::endl;
    std::cout << "time (us): " << c.get_microseconds() << std::endl;
}
HTH

The STLSoft open source library provides quite a good timer on both Windows and Linux platforms. If you want to implement it on your own, just have a look at their sources.

The ACE library has portable high resolution timers also.
Doxygen for high res timer:
http://www.dre.vanderbilt.edu/Doxygen/5.7.2/html/ace/a00244.html
I have seen this implemented a few times as closed-source in-house solutions ... which all resorted to `#ifdef` solutions around native Windows hi-res timers on the one hand and Linux kernel timers using `struct timeval` (see `man timeradd`) on the other hand.
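A minimal sketch of what such an `#ifdef` wrapper typically looks like (the helper name `now_microseconds` is hypothetical; QueryPerformanceCounter on Windows, gettimeofday on Linux, as described above):

#include <stdint.h>
#if defined(_WIN32)
# include <windows.h>
#else
# include <sys/time.h>
#endif

// Hypothetical helper: microseconds since some unspecified epoch.
static uint64_t now_microseconds()
{
#if defined(_WIN32)
    LARGE_INTEGER freq, count;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&count);
    return (uint64_t)(count.QuadPart * 1000000.0 / freq.QuadPart);
#else
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (uint64_t)tv.tv_sec * 1000000u + (uint64_t)tv.tv_usec;
#endif
}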
You can abstract this and a few Open Source projects have done it -- the last one I looked at was the CoinOR class CoinTimer but there are surely more of them.

- I decided to go this route. Your link was dead, so I commented with one that's still working: http://www.songho.ca/misc/timer/timer.html – Patrick Jun 21 '17 at 22:13
- Ahh, nothing like a comment on an eight-year-old question :) I have had good luck in the meantime with the [CCTZ](https://github.com/google/cctz) library out of Google, which builds on some newer C++11 idioms. – Dirk Eddelbuettel Jun 21 '17 at 22:16
I highly recommend the boost::posix_time library for that. It supports timers at various resolutions, down to microseconds I believe.
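A minimal sketch of what that looks like with Boost.Date_Time's microsec_clock (assuming Boost is available; the Windows resolution caveat mentioned elsewhere in this thread still applies):

#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    using namespace boost::posix_time;
    ptime t0 = microsec_clock::local_time();
    // ... work being timed ...
    ptime t1 = microsec_clock::local_time();
    time_duration d = t1 - t0;
    std::cout << d.total_microseconds() << " us\n";
}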

SDL2 has an excellent cross-platform high-resolution timer. If however you need sub-millisecond accuracy, I wrote a very small cross-platform timer library here. It is compatible with both C++03 and C++11/higher versions of C++.
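For the SDL2 route, a minimal sketch using SDL_GetPerformanceCounter and SDL_GetPerformanceFrequency (SDL_GetTicks only has millisecond resolution):

#include <SDL.h>
#include <iostream>

int main(int argc, char* argv[])
{
    (void)argc; (void)argv;
    SDL_Init(SDL_INIT_TIMER);

    Uint64 start = SDL_GetPerformanceCounter();
    // ... work being timed ...
    Uint64 stop = SDL_GetPerformanceCounter();

    // Counter ticks divided by ticks-per-second gives elapsed seconds.
    double seconds = (double)(stop - start) / (double)SDL_GetPerformanceFrequency();
    std::cout << seconds * 1000.0 << " ms\n";

    SDL_Quit();
    return 0;
}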

I found this which looks promising, and is extremely straightforward, not sure if there are any drawbacks:
https://gist.github.com/ForeverZer0/0a4f80fc02b96e19380ebb7a3debbee5
/* ----------------------------------------------------------------------- */
/*
Easy embeddable cross-platform high resolution timer function. For each
platform we select the high resolution timer. You can call the 'ns()'
function in your file after embedding this.
*/
#include <stdint.h>
#if defined(__linux)
# define HAVE_POSIX_TIMER
# include <time.h>
# ifdef CLOCK_MONOTONIC
# define CLOCKID CLOCK_MONOTONIC
# else
# define CLOCKID CLOCK_REALTIME
# endif
#elif defined(__APPLE__)
# define HAVE_MACH_TIMER
# include <mach/mach_time.h>
#elif defined(_WIN32)
# define WIN32_LEAN_AND_MEAN
# include <windows.h>
#endif
static uint64_t ns() {
    static uint64_t is_init = 0;
#if defined(__APPLE__)
    static mach_timebase_info_data_t info;
    if (0 == is_init) {
        mach_timebase_info(&info);
        is_init = 1;
    }
    uint64_t now;
    now = mach_absolute_time();
    now *= info.numer;
    now /= info.denom;
    return now;
#elif defined(__linux)
    static struct timespec linux_rate;
    if (0 == is_init) {
        clock_getres(CLOCKID, &linux_rate); /* resolution is queried but not otherwise used */
        is_init = 1;
    }
    uint64_t now;
    struct timespec spec;
    clock_gettime(CLOCKID, &spec);
    /* integer arithmetic so nanosecond precision isn't lost in a double */
    now = (uint64_t) spec.tv_sec * 1000000000ull + (uint64_t) spec.tv_nsec;
    return now;
#elif defined(_WIN32)
    static LARGE_INTEGER win_frequency;
    if (0 == is_init) {
        QueryPerformanceFrequency(&win_frequency);
        is_init = 1;
    }
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    return (uint64_t) ((1e9 * now.QuadPart) / win_frequency.QuadPart);
#endif
}
/* ----------------------------------------------------------------------- */
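A minimal usage sketch, assuming the `ns()` function above has been embedded in the same file:

#include <iostream>

int main()
{
    uint64_t t0 = ns();
    // ... work being timed ...
    uint64_t t1 = ns();
    std::cout << "elapsed: " << (t1 - t0) << " ns\n";
}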

The first answer to C++ library questions is generally Boost: http://www.boost.org/doc/libs/1_40_0/libs/timer/timer.htm. Does this do what you want? Probably not, but it's a start.
The problem is that you want something portable, and timer functions are not universal across OSes.
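For reference, a sketch of the linked (old, clock()-based) Boost.Timer interface; since it wraps the C `clock` function, expect coarse resolution:

#include <boost/timer.hpp>   // the old, clock()-based Boost.Timer
#include <iostream>

int main()
{
    boost::timer t;          // starts timing on construction
    // ... work being timed ...
    std::cout << t.elapsed() << " s\n";   // resolution limited by clock()
}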

STLSoft has a Performance Library, which includes a set of timer classes, some of which work for both UNIX and Windows.

I am not sure about your requirement. If you want to calculate a time interval, please see the thread below.
Late to the party here, but I'm working in a legacy codebase that can't be upgraded to c++11 yet. Nobody on our team is very skilled in c++, so adding a library like STL is proving difficult (on top of potential concerns others have raised about deployment issues). I really needed an extremely simple cross platform timer that could live by itself without anything beyond bare-bones standard system libraries. Here's what I found:
http://www.songho.ca/misc/timer/timer.html
Reposting the entire source here just so it doesn't get lost if the site ever dies:
//////////////////////////////////////////////////////////////////////////////
// Timer.cpp
// =========
// High Resolution Timer.
// This timer is able to measure the elapsed time with 1 micro-second accuracy
// on Windows, Linux, and Unix systems
//
// AUTHOR: Song Ho Ahn (song.ahn@gmail.com) - http://www.songho.ca/misc/timer/timer.html
// CREATED: 2003-01-13
// UPDATED: 2017-03-30
//
// Copyright (c) 2003 Song Ho Ahn
//////////////////////////////////////////////////////////////////////////////
#include "Timer.h"
#include <stdlib.h>
///////////////////////////////////////////////////////////////////////////////
// constructor
///////////////////////////////////////////////////////////////////////////////
Timer::Timer()
{
#if defined(WIN32) || defined(_WIN32)
    QueryPerformanceFrequency(&frequency);
    startCount.QuadPart = 0;
    endCount.QuadPart = 0;
#else
    startCount.tv_sec = startCount.tv_usec = 0;
    endCount.tv_sec = endCount.tv_usec = 0;
#endif

    stopped = 0;
    startTimeInMicroSec = 0;
    endTimeInMicroSec = 0;
}
///////////////////////////////////////////////////////////////////////////////
// destructor
///////////////////////////////////////////////////////////////////////////////
Timer::~Timer()
{
}
///////////////////////////////////////////////////////////////////////////////
// start timer.
// startCount will be set at this point.
///////////////////////////////////////////////////////////////////////////////
void Timer::start()
{
    stopped = 0; // reset stop flag
#if defined(WIN32) || defined(_WIN32)
    QueryPerformanceCounter(&startCount);
#else
    gettimeofday(&startCount, NULL);
#endif
}
///////////////////////////////////////////////////////////////////////////////
// stop the timer.
// endCount will be set at this point.
///////////////////////////////////////////////////////////////////////////////
void Timer::stop()
{
    stopped = 1; // set timer stopped flag
#if defined(WIN32) || defined(_WIN32)
    QueryPerformanceCounter(&endCount);
#else
    gettimeofday(&endCount, NULL);
#endif
}
///////////////////////////////////////////////////////////////////////////////
// compute elapsed time in micro-second resolution.
// the other getElapsedTime methods call this first, then convert to the
// corresponding resolution.
///////////////////////////////////////////////////////////////////////////////
double Timer::getElapsedTimeInMicroSec()
{
#if defined(WIN32) || defined(_WIN32)
    if(!stopped)
        QueryPerformanceCounter(&endCount);

    startTimeInMicroSec = startCount.QuadPart * (1000000.0 / frequency.QuadPart);
    endTimeInMicroSec = endCount.QuadPart * (1000000.0 / frequency.QuadPart);
#else
    if(!stopped)
        gettimeofday(&endCount, NULL);

    startTimeInMicroSec = (startCount.tv_sec * 1000000.0) + startCount.tv_usec;
    endTimeInMicroSec = (endCount.tv_sec * 1000000.0) + endCount.tv_usec;
#endif

    return endTimeInMicroSec - startTimeInMicroSec;
}
///////////////////////////////////////////////////////////////////////////////
// divide elapsedTimeInMicroSec by 1000
///////////////////////////////////////////////////////////////////////////////
double Timer::getElapsedTimeInMilliSec()
{
    return this->getElapsedTimeInMicroSec() * 0.001;
}
///////////////////////////////////////////////////////////////////////////////
// divide elapsedTimeInMicroSec by 1000000
///////////////////////////////////////////////////////////////////////////////
double Timer::getElapsedTimeInSec()
{
    return this->getElapsedTimeInMicroSec() * 0.000001;
}
///////////////////////////////////////////////////////////////////////////////
// same as getElapsedTimeInSec()
///////////////////////////////////////////////////////////////////////////////
double Timer::getElapsedTime()
{
    return this->getElapsedTimeInSec();
}
and the header file:
//////////////////////////////////////////////////////////////////////////////
// Timer.h
// =======
// High Resolution Timer.
// This timer is able to measure the elapsed time with 1 micro-second accuracy
// on Windows, Linux, and Unix systems
//
// AUTHOR: Song Ho Ahn (song.ahn@gmail.com) - http://www.songho.ca/misc/timer/timer.html
// CREATED: 2003-01-13
// UPDATED: 2017-03-30
//
// Copyright (c) 2003 Song Ho Ahn
//////////////////////////////////////////////////////////////////////////////
#ifndef TIMER_H_DEF
#define TIMER_H_DEF
#if defined(WIN32) || defined(_WIN32) // Windows system specific
#include <windows.h>
#else // Unix based system specific
#include <sys/time.h>
#endif
class Timer
{
public:
    Timer();                             // default constructor
    ~Timer();                            // default destructor

    void start();                        // start timer
    void stop();                         // stop the timer
    double getElapsedTime();             // get elapsed time in seconds
    double getElapsedTimeInSec();        // get elapsed time in seconds (same as getElapsedTime)
    double getElapsedTimeInMilliSec();   // get elapsed time in milli-seconds
    double getElapsedTimeInMicroSec();   // get elapsed time in micro-seconds

protected:

private:
    double startTimeInMicroSec;          // starting time in micro-seconds
    double endTimeInMicroSec;            // ending time in micro-seconds
    int    stopped;                      // stop flag
#if defined(WIN32) || defined(_WIN32)
    LARGE_INTEGER frequency;             // ticks per second
    LARGE_INTEGER startCount;
    LARGE_INTEGER endCount;
#else
    timeval startCount;
    timeval endCount;
#endif
};
#endif // TIMER_H_DEF
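Typical usage, per the interface above (a minimal sketch):

#include "Timer.h"
#include <iostream>

int main()
{
    Timer timer;
    timer.start();
    // ... work being timed ...
    timer.stop();
    std::cout << timer.getElapsedTimeInMilliSec() << " ms\n";
}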

If one is using the Qt framework in the project, the best solution is probably to use QElapsedTimer.
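A sketch of that approach (QElapsedTimer::elapsed() reports milliseconds; nsecsElapsed() is available for finer resolution):

#include <QElapsedTimer>
#include <QDebug>

int main()
{
    QElapsedTimer timer;
    timer.start();
    // ... work being timed ...
    qDebug() << "elapsed:" << timer.elapsed() << "ms"
             << "(" << timer.nsecsElapsed() << "ns )";
    return 0;
}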
