
I'm currently using an explicit cast to unsigned long long and printing it with %llu, but since size_t has the %zu specifier, why doesn't clock_t have one?

There isn't even a macro for it. Maybe I can assume that on an x64 system (OS and CPU) size_t is 8 bytes long (and even in that case, %zu is provided), but what about clock_t?

asked by Spidey, edited by David Cain

5 Answers


There seems to be no perfect way. The root of the problem is that clock_t can be either an integer or a floating-point type.

clock_t can be a floating point type

As Bastien Léonard mentions for POSIX (go upvote him), the C99 N1256 draft, 7.23.1/3, also says that:

[clock_t is] arithmetic types capable of representing times

and 6.2.5/18:

Integer and floating types are collectively called arithmetic types.

so the standard allows clock_t to be either an integer type or a floating-point type.
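Since the standard leaves the choice to the implementation, the only portable way to find out which kind you got is to test it. A minimal sketch, relying only on the guarantee that clock_t is arithmetic, that reports the kind at run time:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Integer division truncates 1/2 to 0; floating division keeps 0.5. */
    if ((clock_t)1 / 2 > 0)
        puts("clock_t is a floating-point type here");
    else
        puts("clock_t is an integer type here");
    return 0;
}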

If you divide by CLOCKS_PER_SEC, use long double

The return value of clock() is implementation-defined, and the only way to get standard meaning out of it is to divide by CLOCKS_PER_SEC to find the number of seconds:

clock_t t0 = clock();
/* Work to be timed. */
clock_t t1 = clock();
/* Divide by CLOCKS_PER_SEC to get seconds; %Lf prints a long double. */
printf("%Lf\n", (long double)(t1 - t0) / CLOCKS_PER_SEC);

This is good enough, although not perfect, for the following two reasons:

  • there seems to be no analogue of intmax_t for floating-point types: How to get the largest precision floating point data type of implemenation and its printf specifier? So if a larger floating-point type comes out tomorrow, it could be used and break your code.

  • if clock_t is an integer, the cast to a floating type is well defined to produce the nearest representable value. You may lose precision, but it would not matter much compared to the absolute value, and it would only happen for huge amounts of time; e.g., long double on x86 is the 80-bit float with a 64-bit significand, which can represent millions of years in seconds.

Go upvote lemonad, who said something similar.

If you suppose it is an integer, use %ju and uintmax_t

Although unsigned long long is currently the largest standard integer type, a larger one could be added in the future, and the standard already allows implementations to provide larger extended integer types. So it is best to cast to the largest unsigned integer type available:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* uintmax_t is the widest unsigned integer type; %ju is its printf conversion. */
printf("%ju", (uintmax_t)(clock_t)1);

uintmax_t is guaranteed to be able to represent any value of any unsigned integer type on the machine.

uintmax_t and its printf specifier %ju were introduced in C99; GCC, for example, implements them.

As a bonus, this solves once and for all the question of how to reliably printf integer types (which, unfortunately, is not necessarily possible for clock_t, since it may be floating point).
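For toolchains where hard-coding %ju is inconvenient, an equivalent hedged sketch uses the PRIuMAX macro from C99's <inttypes.h>, which expands to the correct conversion specifier for uintmax_t:

#include <inttypes.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* PRIuMAX supplies the right length modifier and conversion character. */
    printf("%" PRIuMAX "\n", (uintmax_t)clock());
    return 0;
}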

What could go wrong if clock_t were a floating-point type:

  • if the value is too large to fit into the integer type, the conversion is undefined behavior
  • if it is much smaller than 1, it will get truncated to 0 and you won't see anything

Since those consequences are much harsher than those of the integer-to-floating conversion, using a floating type is likely the better idea.

On glibc 2.21 it is an integer

The manual says that using double is a better idea:

On GNU/Linux and GNU/Hurd systems, clock_t is equivalent to long int and CLOCKS_PER_SEC is an integer value. But in other systems, both clock_t and the macro CLOCKS_PER_SEC can be either integer or floating-point types. Casting CPU time values to double, as in the example above, makes sure that operations such as arithmetic and printing work properly and consistently no matter what the underlying representation is.


answered by Ciro Santilli OurBigBook.com
  • `%ju`? This prints exactly `ju`. – Victor Aug 03 '13 at 08:37
  • @Victor: Are you compiling with `gcc -std=c99` (or telling your compiler to use C99)? What is your compiler version? I have just tested it, and the following works for me with `gcc --version` reporting `gcc (Ubuntu/Linaro 4.7.3-1ubuntu1) 4.7.3`: `printf( "printf uintmax_t = %ju\n", (uintmax_t)1 );`. – Ciro Santilli OurBigBook.com Aug 03 '13 at 09:40
  • I am using Visual Studio 2010 :) – Victor Aug 03 '13 at 13:30
  • It seems that [c99 is not supported by VS2010](http://stackoverflow.com/questions/6688895/does-microsoft-visual-studio-2010-support-c99). I've also read that MS has no plans to implement it in their compilers (correct me if I'm wrong). I'd stick with C++ for Windows programming. – Ciro Santilli OurBigBook.com Aug 03 '13 at 13:47
  • On Windows, mingw-gcc uses the Microsoft library, and thus has the same limitation: it does not recognize the newer format specifiers. Still true in May 2015. –  May 12 '15 at 11:08
  • `unsigned long long int` is *not* the largest possible integer type. Platforms may provide custom integer types of larger size. – fuz May 19 '15 at 08:43
  • @FUZxxl I didn't know the C standard explicitly allows that; I will search for the quote. But in my head I meant "defined by default in the C standard". With that addition, would it be correct? As you say, there are already extensions for 128-bit in GCC http://stackoverflow.com/questions/5381882/types-bigger-than-long-long-in-c which I did not know about. – Ciro Santilli OurBigBook.com May 19 '15 at 08:47
  • Even then you're not quite correct. Not much is said about the sizes of the types defined in standard header files. For instance, a `size_t` may be larger than a `long long int` just fine. – fuz May 19 '15 at 08:49
  • @FUZxxl Thanks, you have taught me something new today! After reading the standard, I have updated the answer to say "_standard_ integer type", which is a term clearly defined in C99, and linked to a more precise explanation at: http://stackoverflow.com/a/30322474/895245 – Ciro Santilli OurBigBook.com May 19 '15 at 10:10

As far as I know, the way you're doing it is the best. Except that clock_t may be a real (floating) type:

time_t and clock_t shall be integer or real-floating types.

http://www.opengroup.org/onlinepubs/009695399/basedefs/sys/types.h.html

answered by Bastien Léonard

It's probably because clock ticks are not a very well-defined unit. You can convert them to seconds and print the result as a double:

/* Convert ticks to seconds before printing. */
double seconds = (double)time_in_clock_ticks / (double)CLOCKS_PER_SEC;
printf("%g seconds", seconds);

The CLOCKS_PER_SEC macro expands to an expression representing the number of clock ticks in a second.
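A self-contained sketch of the same approach; the busy loop is only an illustrative stand-in for real work:

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();
    for (volatile long i = 0; i < 100000000L; i++)  /* illustrative workload */
        ;
    clock_t end = clock();
    /* Cast before dividing so the division is done in floating point. */
    double seconds = (double)(end - start) / (double)CLOCKS_PER_SEC;
    printf("%g seconds\n", seconds);
    return 0;
}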

answered by lemonad
  • I would rather use 1) `long double` as it may have more precision 2) do a single typecast *after* the division: `(long double)(time_in_clock_ticks / CLOCKS_PER_SEC)` to round only once. – Ciro Santilli OurBigBook.com Jun 07 '15 at 08:00

The C standard has to accommodate a wide variety of architectures, which makes it impossible to give any further guarantees aside from the fact that the internal clock type is arithmetic.

In most cases, you're interested in time intervals, so I'd convert the difference in clock ticks to milliseconds. An unsigned long is large enough to represent an interval of nearly 50 days even if it's 32-bit, so it should be large enough for most cases:

clock_t start;
clock_t end;
/* Multiply before dividing so sub-second intervals are not truncated to 0. */
unsigned long millis = (end - start) * 1000 / CLOCKS_PER_SEC;
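A hedged usage sketch of the snippet above; the elided work is illustrative, and note that the multiplication by 1000 can overflow a narrow clock_t over very long intervals:

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();
    /* ... work to be timed ... */
    clock_t end = clock();
    /* Milliseconds, as computed above; assumes the product stays in range. */
    unsigned long millis = (unsigned long)((end - start) * 1000 / CLOCKS_PER_SEC);
    printf("%lu ms\n", millis);
    return 0;
}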
answered by Christoph
  • That's exactly why I can't understand why a macro for the clock_t printing format isn't specified in any header. – Spidey Jul 05 '09 at 03:30
  • @Spidey: but what should the output format be if you can't make any guesses on the representation? Remember, it's not specified if `clock_t` will be an integer or floating point value; if you want to do anything useful, you *have* to relate it to `CLOCKS_PER_SEC`, and that's beyond the domain of `printf()` – Christoph Jul 05 '09 at 03:39
  • @Christoph: since clock_t is not defined, CLOCKS_PER_SEC, afaik, isn't either, and it is of the same type as clock_t. Since clock_t doesn't usually make it to production, I don't care what type it is and I won't rely on it for any parsing. But I see the difference: size_t is always some integer type, while clock_t is not, so we're stuck with explicit casting. – Spidey Jul 06 '09 at 18:08

One way is to use the gettimeofday function. The difference between two timestamps can be computed with a helper like this:

#include <stdio.h>
#include <sys/time.h>

/* Returns the elapsed time (second - first) in microseconds. */
unsigned long diff(struct timeval second, struct timeval first)
{
    struct timeval lapsed;
    unsigned long t;

    /* Borrow a second if the microsecond part would go negative. */
    if (first.tv_usec > second.tv_usec) {
        second.tv_usec += 1000000;
        second.tv_sec--;
    }

    lapsed.tv_usec = second.tv_usec - first.tv_usec;
    lapsed.tv_sec  = second.tv_sec  - first.tv_sec;
    t = lapsed.tv_sec * 1000000 + lapsed.tv_usec;

    printf("%lu,%lu - %lu,%lu = %ld,%ld\n",
           (unsigned long)second.tv_sec, (unsigned long)second.tv_usec,
           (unsigned long)first.tv_sec,  (unsigned long)first.tv_usec,
           (long)lapsed.tv_sec, (long)lapsed.tv_usec);

    return t;
}
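A short usage sketch, assuming the diff function above is in scope:

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval before, after;

    gettimeofday(&before, NULL);
    /* ... work to be timed ... */
    gettimeofday(&after, NULL);

    printf("%lu microseconds\n", diff(after, before));
    return 0;
}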
answered by rani, edited by Nisse Engström
  • POSIX.1-2008 marks gettimeofday() as obsolete, so certain feature-test preprocessor definitions, such as -D_XOPEN_SOURCE=600, may be needed to get a compiler to expose it. – JohnH May 02 '19 at 20:17