
A question regarding the clock tick count returned by clock() from <ctime>. (Basic usage of clock() is covered in other questions.)

On my system, clock_t is an alias for long, whose maximum value according to my compiler's <climits> is 2147483647.

#include <ctime>    // clock(), clock_t, CLOCKS_PER_SEC
#include <iostream>

clock_t delay = clock_t(10) * CLOCKS_PER_SEC;
clock_t start = clock();
while (clock() - start < delay); // note: the semicolon makes this a null statement (busy-wait)
std::cout << clock();

Running this, I get roughly 10060, which is consistent with CLOCKS_PER_SEC being #defined as 1000 on my system.

So with 1000 clocks per second, 2147483647 / 1000 = 2147483.647 seconds, which works out to roughly 24-25 days.

I'm not sure whether this is actually defined behavior in C++, but I note that the common behavior when exceeding the maximum long value is to wrap around to the negative end.

For example,

long m = long(2147483647);
std::cout << ++m << std::endl;

Would output: -2147483648

So suppose the program had been running for a long time before initializing start, and start happened to be initialized to 2147483647 (the maximum possible long value).

At this point, I'd assume the values returned by clock() would start wrapping, giving values such as -2147482649 that then climb back toward 2147483647. So now my original code would probably take a very long time to complete the loop, much longer than the intended delay.

Is this the actual behavior? Should this style of pause only be used for delays below a certain length? Is there some other check that ought to be made to make this "safe"?

user17753

3 Answers


What happens when you overflow a signed integral type is implementation defined, and could be a signal. And yes, this means that clock() can only be used for a fixed length of time after the start of your process, and probably only then if the implementation ensures that the first call will always return 0 (the case on all implementations I know of).
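
As a rough sketch, the length of that usable window can be computed from the implementation's own constants, assuming clock_t is an integral type and that clock() starts near zero at process start:

#include <ctime>
#include <iostream>
#include <limits>

int main() {
    // Maximum elapsed time clock() can report before clock_t overflows,
    // assuming the tick count starts near zero when the process starts.
    std::cout << std::numeric_limits<std::clock_t>::max() / CLOCKS_PER_SEC
              << " seconds\n";
}

On the asker's system (long clock_t, CLOCKS_PER_SEC of 1000) this prints 2147483, matching the 24-25 day figure computed in the question.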

James Kanze
    Formally, the behavior of an overflow of a signed integral type is *undefined* behavior, not implementation defined. The difference is that for the latter, the implementation is required to document what it does. For the former, anything goes. – Pete Becker Aug 13 '12 at 15:26
  • @PeteBecker Ah, yes. It's the conversion of a value that does not fit which is implementation defined (but at least in C, may be an implementation defined signal). I don't know why I got the two mixed up. – James Kanze Aug 13 '12 at 15:36

Use GetTickCount64():
http://msdn.microsoft.com/en-us/library/windows/desktop/ms724411(v=vs.85).aspx
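
A minimal Windows-only sketch of the same busy-wait using GetTickCount64(), which returns the milliseconds since system start as an unsigned 64-bit value, so wrap-around is not a practical concern:

#include <windows.h>
#include <iostream>

int main() {
    // GetTickCount64() returns a ULONGLONG millisecond count; at 64 bits
    // it will not wrap for roughly 585 million years of uptime.
    ULONGLONG start = GetTickCount64();
    while (GetTickCount64() - start < 10000ULL)
        ; // busy-wait roughly ten seconds
    std::cout << GetTickCount64() - start << " ms elapsed\n";
}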

kain64b
  • This answer is less than helpful if the code in question isn't running on Windows, which the question does not state. – timelmer Jun 03 '17 at 04:44

If clock_t is defined as a signed type, then yes, that is the expected behavior.

If you're concerned about wrap-around, you could always cast the clock_t value to an unsigned long (which means you're assuming that clock_t is no wider than a long), and store/compare that value instead.
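
A minimal sketch of that approach (the helper name is mine, and it assumes clock_t is no wider than unsigned long, as noted above):

#include <ctime>

// Unsigned subtraction is defined modulo 2^N, so the elapsed-tick
// computation stays correct even if the tick count wraps between
// the two calls (assuming clock_t values fit in unsigned long).
void spin_delay(unsigned long seconds) {
    unsigned long delay = seconds * static_cast<unsigned long>(CLOCKS_PER_SEC);
    unsigned long start = static_cast<unsigned long>(std::clock());
    while (static_cast<unsigned long>(std::clock()) - start < delay)
        ; // null statement: busy-wait until the delay has elapsed
}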

David R Tribble
  • If you really want to do this safely, C99 and C++11 have uintmax_t, which is a typedef for the largest unsigned type. A slightly less safe approach would be to use unsigned long long; the advantage of uintmax_t is that implementations are allowed to provide larger integral types, and there's a remote possibility that clock_t would use such a larger type. Personally, I'd use unsigned long, as originally suggested. – Pete Becker Aug 13 '12 at 15:40
  • Hey, welcome to StackOverflow, @PeteBecker! Long time no see. – David R Tribble Aug 13 '12 at 16:47
  • Thanks, @Loadmaster; been hanging around newsgroups too long. – Pete Becker Aug 13 '12 at 16:59
  • What's worse, I found that `clock()` returns `-1` if the elapsed time is unavailable. Though `-1` could also represent a "valid" time, since `clock_t` is a signed type, and the error value would not be distinguishable after an unsigned cast? – user17753 Aug 13 '12 at 19:13