What is the best, most accurate timer in C++?
-
The absolute best timer would probably be some system-specific call on an RTOS running on some embedded device attached to an atomic clock. Is that what you're looking for, or are you willing to settle for something a little less than "the best, most accurate timer"? – James McNellis Apr 02 '11 at 04:27
-
@James And the time taken to travel along the wires, and to be processed/etc is also taken into approximate consideration? (That sentence didn't make any sense...) – Mateen Ulhaq Apr 02 '11 at 04:41
-
Also, such a high resolution timer might eat CPU. – Mateen Ulhaq Apr 02 '11 at 04:42
4 Answers
In C++11 you can portably get to the highest resolution timer with:
#include <iostream>
#include <chrono>
#include "chrono_io"

int main()
{
    typedef std::chrono::high_resolution_clock Clock;
    auto t1 = Clock::now();         // first sample
    auto t2 = Clock::now();         // second sample, taken immediately after
    std::cout << t2 - t1 << '\n';   // chrono_io provides operator<< for durations
}
Example output:
74 nanoseconds
"chrono_io" is an extension to ease I/O issues with these new types and is freely available here.
There is also an implementation of <chrono> available in Boost (it might still be on tip-of-trunk; I'm not sure it has been released).

-
I tried running the now() function 300 million times, and around 100 times the difference between t1 and t2 was larger than 10 microseconds. Is that because of context switching or something else? Thanks – user1687035 Dec 12 '16 at 05:50
-
@user1687035: Context switching is a likely culprit. It depends on what platform, which compiler (even compiler version), what other processes are on the machine, etc. A general purpose computer is not a great clock, unless it is constantly asking other computers what time it is. ;-) On macOS I get fairly solid timings *unless* time machine is backing up, spotlight is indexing, or mail is fetching. – Howard Hinnant Dec 12 '16 at 14:39
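One common way to damp such outliers is to repeat the measurement many times and keep only the minimum; a minimal sketch (the iteration count of 1000 is an arbitrary choice):

#include <algorithm>
#include <chrono>
#include <iostream>

int main()
{
    typedef std::chrono::high_resolution_clock Clock;
    auto best = Clock::duration::max();
    for (int i = 0; i < 1000; ++i)
    {
        auto t1 = Clock::now();
        auto t2 = Clock::now();
        best = std::min(best, t2 - t1);  // context-switch outliers are discarded
    }
    std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(best).count()
              << " ns\n";
}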
The answer to this is platform-specific. The operating system is responsible for keeping track of timing, and consequently the C++ language itself (prior to C++11's <chrono>) provides no built-in constructs for doing this.
However, here are some resources for platform-dependent timers:
- Windows API - SetTimer: http://msdn.microsoft.com/en-us/library/ms644906(v=vs.85).aspx
- Unix - setitimer (see the sketch below): http://linux.die.net/man/2/setitimer
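For the Unix side, a minimal setitimer sketch that fires SIGALRM once per second (the handler only calls write(), which is async-signal-safe):

#include <csignal>
#include <sys/time.h>
#include <unistd.h>

static void on_alarm(int)
{
    const char msg[] = "tick\n";
    write(1, msg, sizeof msg - 1);  // async-signal-safe output
}

int main()
{
    std::signal(SIGALRM, on_alarm);
    itimerval tv = {};
    tv.it_value.tv_sec = 1;     // first expiry after one second
    tv.it_interval.tv_sec = 1;  // then every second
    setitimer(ITIMER_REAL, &tv, 0);
    for (;;)
        pause();                // sleep until a signal arrives
}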
A cross-platform solution might be boost::asio::deadline_timer.
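A minimal sketch of the deadline_timer approach (shown as a blocking wait; it can also run asynchronously via async_wait):

#include <iostream>
#include <boost/asio.hpp>

int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer t(io, boost::posix_time::seconds(1));
    t.wait();  // blocks until the timer expires
    std::cout << "timer expired\n";
}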

-
Answer relevant for applications that aren't ridiculously time-critical – Guillaume Apr 02 '11 at 04:30
Under Windows it would be QueryPerformanceCounter, though seeing as you didn't specify any conditions, it's also possible to use an external ultra-high-resolution timer that has a C++ interface for its driver.
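A minimal sketch of the QueryPerformanceCounter approach (QueryPerformanceFrequency supplies the tick rate needed to convert counts to seconds):

#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);  // ticks per second
    QueryPerformanceCounter(&start);
    // ... work to be timed ...
    QueryPerformanceCounter(&stop);
    double seconds = static_cast<double>(stop.QuadPart - start.QuadPart) / freq.QuadPart;
    std::cout << seconds << " s\n";
}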

-
std::chrono::high_resolution_clock and QueryPerformanceCounter give the same result, both with a resolution of 0.1 microseconds. And both give widely varying results from run to run! – Olle Lindeberg Nov 21 '20 at 15:50
The C++ standard doesn't say a whole lot about time. There are a few features inherited from C via the <ctime> header.
The function clock is the only way to get sub-second precision, but precision may be as low as one second (it is defined by the macro CLOCKS_PER_SEC). Also, it does not measure real time at all, but processor time.
The function time measures real time, but (usually) only to the nearest second.
To measure real time with subsecond precision, you need a nonstandard library.
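For completeness, a minimal sketch of measuring processor time with clock, keeping the caveats above in mind:

#include <ctime>
#include <iostream>

int main()
{
    std::clock_t start = std::clock();
    // ... work to be timed ...
    std::clock_t stop = std::clock();
    // CLOCKS_PER_SEC converts clock ticks to seconds of processor time
    double cpu_seconds = static_cast<double>(stop - start) / CLOCKS_PER_SEC;
    std::cout << cpu_seconds << " CPU seconds\n";
}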
