I was running a C++ program, and one thing I noticed is that on Windows 7, CLOCKS_PER_SEC in C++ gives 1000, while on Linux Fedora 16 it gives 1000000. Can anyone explain this behaviour?
- It depends on the clock() implementation on your OS; see this question for more info: http://stackoverflow.com/questions/588307/c-obtaining-milliseconds-time-on-linux-clock-doesnt-seem-to-work-properl – kiranputtur Sep 03 '12 at 08:16
- Easy: if it didn't vary between implementations, the constant wouldn't be necessary. It exists because it is up to the implementation what kind of timer resolution to provide under this API. And Windows goes for 1000 ticks per second. – jalf Sep 03 '12 at 08:17
2 Answers
What's to justify? CLOCKS_PER_SEC is implementation defined, and can be anything. All it indicates is the units returned by the function clock(). It doesn't even indicate the resolution of clock(): POSIX requires it to be 1000000, regardless of the actual resolution. If Windows is returning 1000, that's probably not the actual resolution either. (I find that my Linux box has a resolution of 10 ms, and my Windows box 15 ms.)
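
To see this concretely, here is a minimal sketch (plain standard C++, no platform-specific calls assumed) that busy-waits until `clock()` reports a new value and prints the size of the step; on the machines described above it should report roughly 10 ms or 15 ms, whatever CLOCKS_PER_SEC claims:

```cpp
// A minimal sketch probing the actual granularity of clock(): take a
// reading, busy-wait until the reported value changes, and print the
// size of the step in milliseconds.
#include <cstdio>
#include <ctime>

int main() {
    std::clock_t start = std::clock();
    std::clock_t next = start;
    while (next == start) {          // spin until clock() ticks over
        next = std::clock();
    }
    double step_ms = 1000.0 * double(next - start) / CLOCKS_PER_SEC;
    std::printf("CLOCKS_PER_SEC = %ld, observed step ~= %.3f ms\n",
                static_cast<long>(CLOCKS_PER_SEC), step_ms);
}
```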

- OK, so the clock() function has nothing to do with the clock speed of the processor, and is just to compute the time taken by the process. Am I right? – Akashdeep Saluja Sep 03 '12 at 09:11
- @AkashdeepSaluja ...to compute the _CPU time_ taken by the process, not the real time. Cf. the great `sleep` example [here](http://stackoverflow.com/a/647613/777186). – jogojapan Sep 03 '12 at 09:13
- @AkashdeepSaluja Right. `clock()` is sort of a primitive benchmarking tool. It returns arbitrary values (but on the systems I've used, the first call always returns 0). The difference between two calls returns the CPU time used between the two calls, measured in 1 second/`CLOCKS_PER_SEC` units. (Note however that under Windows, it will return elapsed time, rather than CPU time.) – James Kanze Sep 03 '12 at 09:34
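
For anyone who wants to try the `sleep` comparison from the linked answer, a small sketch along those lines, assuming a C++11 compiler for `std::this_thread::sleep_for` (the exact numbers will vary by platform):

```cpp
// A sketch of the sleep comparison mentioned above. On a conforming POSIX
// system, clock() should report almost no CPU time across the sleep, while
// the wall clock advances by about a second; under Windows, clock()
// reportedly tracks elapsed time, so both numbers would be close to 1 s.
#include <chrono>
#include <cstdio>
#include <ctime>
#include <thread>

int main() {
    std::clock_t c0 = std::clock();
    auto w0 = std::chrono::steady_clock::now();

    std::this_thread::sleep_for(std::chrono::seconds(1)); // no CPU work here

    std::clock_t c1 = std::clock();
    auto w1 = std::chrono::steady_clock::now();

    double cpu_s  = double(c1 - c0) / CLOCKS_PER_SEC;
    double wall_s = std::chrono::duration<double>(w1 - w0).count();
    std::printf("CPU time: %.3f s, wall time: %.3f s\n", cpu_s, wall_s);
}
```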
Basically, the implementation of the clock() function has some leeway for different operating systems. On Linux Fedora, the clock ticks faster: it ticks 1000000 times a second. This clock tick is distinct from the clock rate of your CPU, and sits on a different layer of abstraction. Windows tries to make the number of clock ticks equal to the number of milliseconds.
This macro expands to an expression representing the number of clock ticks in a second, as returned by the function clock.
Dividing a count of clock ticks by this expression yields the number of seconds.
CLK_TCK is an obsolete alias of this macro.
Reference: http://www.cplusplus.com/reference/clibrary/ctime/CLOCKS_PER_SEC/
You should also know that the Windows implementation is not for true real-time applications. The 1000-tick clock is derived by dividing a hardware clock by a power of 2, which actually yields a 1024-tick clock. To convert it to a 1000-tick clock, Windows skips certain ticks, meaning some ticks are slower than others!
A separate hardware clock (not the CPU clock) is normally used for timing. Reference: http://en.wikipedia.org/wiki/Real-time_clock

- The last paragraph doesn't make sense, honestly. If you divide a 3,000,000,000 Hz CPU clock rate by powers of 2, you don't get a 1024 Hz clock. And you'd get yet another result for a 3.1 GHz CPU. I.e. it just can't work like you explained. – MSalters Sep 03 '12 at 08:35
- Plus, many CPUs don't even run at a fixed rate these days, with power-saving mechanisms, so real-time clocks in general don't count clock cycles any more. – jcoder Sep 03 '12 at 08:37
- I am a bit confused: if CLOCKS_PER_SEC is different from the actual CPU clock, then what exactly does it give? – Akashdeep Saluja Sep 03 '12 at 08:37
- Sorry, @MSalters, I made a mistake; it actually uses a separate hardware clock. I updated the answer. – ronalchn Sep 03 '12 at 08:46
- @AkashdeepSaluja An arbitrary value, which defines the units returned by `clock()`. That's really all you can say about it (except that `clock()` doesn't work under Windows: it's supposed to return the CPU time.) – James Kanze Sep 03 '12 at 09:01
- @JamesKanze Thanks, I got it. But another doubt struck me: if I run a program with the same input many times, the clock function returns different times, varying by 1%-2%. If clock() returns the CPU time and does not include any waiting time for the process, then why is there a difference? – Akashdeep Saluja Sep 03 '12 at 12:24
- All sorts of possibilities: caching? Cache misses are expensive. Also, the first time you run a program it is probably going to take longer, to load the libraries and stuff. – ronalchn Sep 03 '12 at 12:29
- @AkashdeepSaluja First, are you running it under Windows? If so: Windows `clock()` is broken, and returns elapsed time. Otherwise: there are always issues of cache misses or hits, etc., which will affect the CPU time, as well as pipeline issues. These depend on what else is happening in the processor at the same time, but will be charged to the process. – James Kanze Sep 03 '12 at 12:54