For QPC, you can call QueryPerformanceFrequency to get the rate it updates at. Unless you are using time(), you will get better than 0.5s timing accuracy anyway, but clock() isn't all that accurate - quite often 10ms granularity [although apparently CLOCKS_PER_SEC is standardized at 1 million, making the numbers APPEAR more accurate].
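For illustration, here's a minimal sketch of reading the QPC rate and timing an interval with it on Windows (QueryPerformanceFrequency/QueryPerformanceCounter are the Win32 calls; the rest is just an assumed harness):
#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);   // ticks per second of the QPC source
    QueryPerformanceCounter(&t0);
    // ... the code you want to time goes here ...
    QueryPerformanceCounter(&t1);
    double seconds = double(t1.QuadPart - t0.QuadPart) / double(freq.QuadPart);
    std::cout << "QPC runs at " << freq.QuadPart << " Hz, elapsed "
              << seconds << " s" << std::endl;
    return 0;
}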
If you do something along these lines, you can figure out how small a gap you can measure [although at REALLY high frequency you may not be able to notice how small, e.g. a timestamp counter that updates every clock cycle and takes 20-40 clock cycles to read]:
#include <ctime>
#include <iostream>
using namespace std;

int main()
{
    time_t t, t1;
    t = time(NULL);
    // Wait for the next "second" to tick over so we start on a boundary.
    while (t == (t1 = time(NULL))) /* do nothing */ ;

    clock_t old = 0;
    clock_t min_diff = 1000000000;
    clock_t start, end;
    start = clock();
    int count = 0;
    // Sample clock() for one wall-clock second and track the smallest step.
    while (t1 == time(NULL))
    {
        clock_t c = clock();
        if (old != 0 && c != old)
        {
            count++;
            clock_t diff = c - old;
            if (min_diff > diff) min_diff = diff;
        }
        old = c;
    }
    end = clock();
    cout << "Clock changed " << count << " times" << endl;
    cout << "Smallest difference " << min_diff << " ticks" << endl;
    cout << "One second ~= " << end - start << " ticks" << endl;
    return 0;
}
Obviously, you can apply the same principle to other time-sources.
(Not compile-tested, but hopefully not too full of typos and mistakes)
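For instance, the same granularity probe redone against std::chrono::steady_clock (C++11 onwards) might look roughly like this - just a sketch of one possible variant, not the original code:
#include <chrono>
#include <iostream>

int main()
{
    using clk = std::chrono::steady_clock;
    clk::duration min_diff = clk::duration::max();
    clk::time_point old = clk::now();
    clk::time_point stop = old + std::chrono::seconds(1);
    long count = 0;
    // Sample the clock back-to-back for one second, tracking the smallest
    // non-zero step it ever takes - that is the finest gap you can measure.
    for (clk::time_point now = clk::now(); now < stop; now = clk::now())
    {
        if (now != old)
        {
            clk::duration diff = now - old;
            if (diff < min_diff) min_diff = diff;
            count++;
            old = now;
        }
    }
    std::cout << "Clock changed " << count << " times" << std::endl;
    std::cout << "Smallest step "
              << std::chrono::duration_cast<std::chrono::nanoseconds>(min_diff).count()
              << " ns" << std::endl;
    return 0;
}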
Edit:
So, if you are measuring times in the range of 10 seconds, a timer that runs at 100Hz would give you 1000 "ticks". But it could be 999 or 1001, depending on your luck and whether you catch it just right or just wrong, so that's 2000 ppm there - the clock input may vary too, but that's a much smaller variation, ~100ppm at most. For Linux, clock() is updated at 100Hz (the actual timer that runs the OS may run at a higher frequency, but clock() in Linux will update at 100Hz or 10ms intervals) [and it only counts when the CPU is being used, so sitting 5 seconds waiting for user input is 0 time].
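To put a number on that quantisation error, here's a quick back-of-the-envelope calculation (the 100Hz rate and 10-second interval are just the assumed figures from above):
#include <iostream>

int main()
{
    const double tick_hz = 100.0;               // assumed clock() update rate
    const double interval_s = 10.0;             // assumed measurement length
    const double ticks = tick_hz * interval_s;  // ideal count: 1000
    const double one_tick_ppm = 1e6 / ticks;    // 1000 ppm per tick
    // The reading can land anywhere from 999 to 1001 ticks, a 2-tick spread.
    std::cout << "spread of 2 ticks in " << ticks << " ticks = "
              << 2.0 * one_tick_ppm << " ppm" << std::endl;
    return 0;
}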
In Windows, clock() measures the actual elapsed time, same as your wrist watch does, not just the CPU time being used, so 5 seconds waiting for user input is counted as 5 seconds of time. I'm not sure how accurate it is.
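A small sketch to check which behaviour you get on your platform - sleeping uses essentially no CPU, so a CPU-time clock() (Linux/glibc) should report roughly 0, while a wall-time clock() (Windows) should report roughly 2 seconds:
#include <chrono>
#include <ctime>
#include <iostream>
#include <thread>

int main()
{
    clock_t start = clock();
    std::this_thread::sleep_for(std::chrono::seconds(2));  // idle, no CPU work
    clock_t end = clock();
    std::cout << "clock() saw "
              << double(end - start) / CLOCKS_PER_SEC
              << " seconds" << std::endl;
    return 0;
}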
The other problem that you will find is that modern systems are not very good at repeatable timing in general - no matter what you do, the OS, the CPU and the memory all conspire together to make life a misery for getting the same amount of time for two runs. CPUs these days often run with an intentionally variable clock (it's allowed to drift about 0.1-0.5%) to reduce electromagnetic radiation for EMC (electromagnetic compatibility), spreading out the spikes that can otherwise "sneak out" of that nicely sealed computer box.
In other words, even if you can get a very standardized clock, your test results will vary up and down a bit, depending on OTHER factors that you can't do anything about...
In summary, unless you are looking for a number to fill in on a form that requires a ppm figure for your clock accuracy, and it's a government form that you can't NOT fill that information into, I'm not entirely convinced it's very useful to know the accuracy of the timer used to measure the time itself, because other factors will play AT LEAST as big a role.