
I'm assigning the value of `unistd.h`'s `clock()` to two `int` variables, as follows:

    int i;
    int start_time = clock();
    for (i = 0; i < 1000000; i++) {
        printf("%d\n", i + 1);
    }
    int end_time = clock();

However, when I print their values, the actual time elapsed differs from the time they report. The POSIX standard declares that `CLOCKS_PER_SEC` must equal one million, which implies that one clock tick is a microsecond. Is the clock just not running at the speed the standard expects, or is my loop causing some weirdness in the calculation?

I'm trying to measure the speed of different operations in a similar fashion, and an inaccurate clock ruins my experiments.

2mac
  • How do you know what the actual time elapsed is? (N.B. I would expect the above code to take rather more than a second and to be dominated by I/O costs.) – zwol Jul 14 '14 at 01:51
  • 2
    `clock()` measures CPU time, not real time. Because of multiprocessing and I/O time, CPU time will generally be less than wall clock time. – Barmar Jul 14 '14 at 01:52
  • 1
    If you have `unistd.h` and POSIX compliance, you might want to try [`clock_gettime(CLOCK_MONOTONIC)`](http://linux.die.net/man/3/clock_gettime) instead. – zwol Jul 14 '14 at 01:53
  • It takes several seconds to print from 1 to 1,000,000. The value assigned to `end_time` is usually around 900,000, which in microseconds, means – 2mac Jul 14 '14 at 01:53
  • Most PCs are simply not made to measure time with this accuracy, besides your CPU is sharing time with other processes. In addition, `clock()` value increases so fast it overflows back to 0 every 52 seconds, so this is something you have to keep in mind too. – Havenard Jul 14 '14 at 01:55
  • You should probably clarify if you're expecting to measure wall time or cpu time. If you're interested in wall time then, as others have mentioned, clock is the wrong tool for the job. Without that clarification I don't think you're going to get a meaningful answer. – Retired Ninja Jul 14 '14 at 02:02
  • 1
    [POSIX](http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/time.h.html) actually says: _`CLOCKS_PER_SEC` A number used to convert the value returned by the `clock()` function into seconds. The value shall be an expression with type `clock_t`. [XSI] [Option Start] The value of `CLOCKS_PER_SEC` shall be 1 million on XSI-conformant systems. However, it may be variable on other systems, and it should not be assumed that `CLOCKS_PER_SEC` is a compile-time constant. [Option End]_ Your observation is only accurate for systems that are trying to be XSI-conformant. – Jonathan Leffler Jul 14 '14 at 02:12
  • If you want to measure CPU time, then `clock` *is* accurate and your stopwatch isn't. If you want to measure wall time, you can use a monotonic posix clock, but the results will vary wildly depending on CPU and IO load. You will have to carefully control experiment condition, i.e. ensure no other processes consume any significant portion of CPU and IO capacity. – n. m. could be an AI Jul 14 '14 at 03:07
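
To separate the two kinds of time discussed above, here is a minimal sketch along the lines of zwol's and n. m.'s suggestions: it times the same loop with both `clock()` (CPU time) and `clock_gettime(CLOCK_MONOTONIC)` (wall time). It assumes a POSIX system; on older glibc you may need to link with `-lrt`.

    #define _POSIX_C_SOURCE 200112L
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);  /* wall-clock start */
        clock_t c0 = clock();                 /* CPU-time start */

        for (int i = 0; i < 1000000; i++) {
            printf("%d\n", i + 1);
        }

        clock_t c1 = clock();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        /* The nanosecond difference may be negative; adding it as a
           signed fraction still yields the correct elapsed time. */
        double wall = (t1.tv_sec - t0.tv_sec)
                    + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        double cpu = (double)(c1 - c0) / CLOCKS_PER_SEC;

        /* Report on stderr so the timing line is not mixed into the
           buffered stdout stream being measured. */
        fprintf(stderr, "wall: %.3f s, cpu: %.3f s\n", wall, cpu);
        return 0;
    }

On a typical desktop the wall time will exceed the CPU time by roughly the amount of time the process spends blocked on terminal I/O.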

1 Answer


> It takes several seconds to print from 1 to 1,000,000. The value assigned to `end_time` is usually around 900,000, which in microseconds

Your processor is fast. Your I/O is not nearly as fast.

When your code is executed, your processor reads the current time from the hardware clock and assigns it to `start_time`. Then it goes through the loop and puts one million lines into the output buffer. Putting things into the output buffer does not mean they have actually been displayed yet.

That is why the measured time comes out at less than a second, even though you watch the output scroll by for several seconds.

Edit: Just to clarify, it seems the phrase "putting things into the output buffer does not mean they have actually been displayed yet" has introduced confusion. It is written from the perspective of the executing process, and does not mean that the processor puts all of the output into the buffer in one go.

What actually happens (as n. m. has pointed out) is this: `clock()` returns the processor time consumed by the process, not the time of day. Because the output buffer is small and the process spends long stretches waiting after each flush of the buffer, the CPU time is significantly smaller than the actual execution time. Hence, from the perspective of the process, execution looks fast while displaying the output looks very slow.
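
For the conversion itself, as Jonathan Leffler's comment notes, `CLOCKS_PER_SEC` is not guaranteed to be one million (or even a compile-time constant) on every system, so a portable sketch divides by it explicitly:

    #include <time.h>

    /* Hypothetical helper: convert two clock() readings into seconds
       of CPU time without assuming one tick per microsecond. */
    double cpu_seconds(clock_t start, clock_t end)
    {
        return (double)(end - start) / CLOCKS_PER_SEC;
    }

Used around the loop in the question, this reports CPU time; it will still be noticeably less than the seconds you see on your stopwatch.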

sampathsris
  • This is not an accurate explanation. The buffer normally is much smaller than one million lines. The CPU will need to flush the buffer and wait many times. The correct explanation is that the wait time is not being measured by `clock()`. – n. m. could be an AI Jul 14 '14 at 03:15
  • Also, there is a `\n` at the end of the `printf`, so the buffer will be flushed each time. – mch Jul 14 '14 at 06:49