By chance, I found out about the existence of the clock_gettime()
function for Linux systems. Since I'm looking for a way to measure the execution time of a function, I tried it with MinGW gcc 8.2.0 on a Windows 10 64-bit machine:
#include <time.h>
#include <stdio.h>

int main() {
    struct timespec tstart, tend;

    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &tstart);
    for (int i = 0; i < 100000; ++i);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &tend);

    printf("It takes %li nanoseconds for 100,000 empty iterations.\n", tend.tv_nsec - tstart.tv_nsec);
    return 0;
}
This code snippet compiles without warnings or errors, and there are no runtime failures (at least none written to stdout).
Output:
It takes 0 nanoseconds for 100,000 empty iterations.
I don't believe that's true.
Can you spot the flaw?
One more thing:
According to the N1570 committee draft (April 12, 2011) of ISO/IEC 9899:201x, shouldn't timespec_get() take the role of clock_gettime() instead?
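For reference, here is a minimal sketch of what I have in mind, assuming the MinGW-w64 runtime actually provides C11's timespec_get(). Note that TIME_UTC is the only time base C11 defines, and it is wall-clock time rather than CPU time, so it may not be a direct replacement for the CPU-time clocks above:

#include <time.h>
#include <stdio.h>

int main() {
    struct timespec tstart, tend;

    /* TIME_UTC is the only base C11 guarantees; timespec_get()
       returns the base on success and 0 on failure. */
    timespec_get(&tstart, TIME_UTC);
    for (int i = 0; i < 100000; ++i);
    timespec_get(&tend, TIME_UTC);

    /* Combine seconds and nanoseconds so the difference stays valid
       even if tv_sec ticks over between the two calls. */
    long long diff = (tend.tv_sec - tstart.tv_sec) * 1000000000LL
                   + (tend.tv_nsec - tstart.tv_nsec);
    printf("It takes %lld nanoseconds for 100,000 empty iterations.\n", diff);
    return 0;
}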