34

Do you know how to use gettimeofday to measure computing time? I can take one timestamp with this code:

  #include <stdio.h>
  #include <time.h>
  #include <sys/time.h>

  char buffer[30];
  struct timeval tv;
  time_t curtime;

  gettimeofday(&tv, NULL);
  curtime = tv.tv_sec;

  strftime(buffer, 30, "%m-%d-%Y %T.", localtime(&curtime));
  printf("%s%ld\n", buffer, tv.tv_usec);

I take one measurement like this before the computation and a second one after. But how do I subtract them?

I need the result in milliseconds.

gonidelis
Waypoint
    `gettimeofday` actually should not be used to measure the elapsed time. Use `clock_gettime(CLOCK_MONOTONIC)` instead. [Here's why](http://blog.habets.pp.se/2010/09/gettimeofday-should-never-be-used-to-measure-time) – Daniel Kamil Kozar Oct 01 '13 at 07:26
  • 1
    What is `ld` in the printf statement? And why do some programs use printf("time = %06lu\n", now.tv_usec); — what does the 06 inside the quotes mean? – Beginner Dec 06 '13 at 06:48
  • A very nice site with some good reference and sometimes examples but there are others like wikibooks for one. http://www.techonthenet.com/c_language/standard_library_functions/time_h/clock.php – Douglas G. Allen May 28 '15 at 15:14
  • @Beginner...... %ld is a format specifier: for long decimal and %06lu is a format specifier: for long unsigned with 6 leading zeros... You can read up on printf format specifiers here: http://www.cplusplus.com/reference/cstdio/printf/ – NeoH4x0r Sep 15 '15 at 14:23

5 Answers

59

To subtract timevals:

gettimeofday(&t0, 0);
/* ... */
gettimeofday(&t1, 0);
long elapsed = (t1.tv_sec-t0.tv_sec)*1000000 + t1.tv_usec-t0.tv_usec;

This is assuming you'll be working with intervals shorter than ~2000 seconds, at which point the arithmetic may overflow depending on the types used. If you need to work with longer intervals just change the last line to:

long long elapsed = (t1.tv_sec-t0.tv_sec)*1000000LL + t1.tv_usec-t0.tv_usec;
R.. GitHub STOP HELPING ICE
  • not working, I need milliseconds and it says segmentation fault – Waypoint Mar 19 '11 at 15:01
  • 23
    The segmentation fault has nothing to do with this code. It's happening elsewhere. – R.. GitHub STOP HELPING ICE Mar 19 '11 at 15:07
  • I get the segmentation fault only when using this code; besides, it is not counting in milliseconds – Waypoint Mar 19 '11 at 15:09
  • 8
    It counts microseconds. Divide by 1000 if you need milliseconds, but milliseconds are generally considered very very poor time resolution. If you get a segmentation fault you need to show your code or use a debugger to find where it's happening yourself, but it definitely cannot happen due to any of the code I suggested. – R.. GitHub STOP HELPING ICE Mar 19 '11 at 15:59
  • Many problems here - The answer above is poor code (uninitialized memory location) and gettimeofday is a function that is also deprecated. – Xofo May 27 '16 at 04:54
  • 1
    @Xofo: What uninitialized memory? Yes gtod is deprecated but people still use it for moderately reasonable reasons. Use `clock_gettime` and divide by 1000 if you prefer. – R.. GitHub STOP HELPING ICE May 27 '16 at 16:04
  • @Waypoint did you figure out your seg fault? Curious because he says "shorter than ~2000 seconds the arithmetic may overflow".. – Max von Hippel May 28 '16 at 16:01
4

The advice offered by @Daniel Kamil Kozar is correct: gettimeofday actually should not be used to measure elapsed time. Use clock_gettime(CLOCK_MONOTONIC) instead.


Man Pages say - The time returned by gettimeofday() is affected by discontinuous jumps in the system time (e.g., if the system administrator manually changes the system time). If you need a monotonically increasing clock, see clock_gettime(2).

The Opengroup says - Applications should use the clock_gettime() function instead of the obsolescent gettimeofday() function.

Everyone seems to love gettimeofday until they run into a case where it does not work or is not there (VxWorks) ... clock_gettime is fantastically awesome and portable.


Xofo
3

No. gettimeofday should NEVER be used to measure time.

This is causing bugs all over the place. Please don't add more bugs.

Thomas
  • @ijt what do you mean? What I linked to lists several already, including dropped connections, reboots, wrong data, and hangs. – Thomas Apr 08 '17 at 09:24
  • 1
    "NEVER be used to measure time" is extremely misleading, even in the first line of that link it says, "should only be used to get the current time if the current wall-clock time is actually what you want." Sometimes people want to know how long their function took to run, in real time. "NEVER" is hyperbole. – Eliezer Miron Mar 26 '19 at 19:01
  • 1
    @EliezerMiron no you *never* want to use gettimeofday if you want to know how *long the function took to run*. You *only* want to use gettimeofday if you want to know *when* it ran, which is a completely different question. You should *never* compare or subtract two outputs from gettimeofday. – Thomas Mar 27 '19 at 22:43
  • 1
    If gettimeofday can give you an accurate reading of when the function ran (in wall-clock time), then why wouldn't it be able to compare two times to see how much wall-clock time the function took to run? – Eliezer Miron Apr 01 '19 at 21:18
  • @EliezerMiron gettimeofday has different requirements and a different use case. It's much more important for its output to be correct according to new information (user commands new time to be set, NTP tooling jumps the time) than for time not to go backwards. It's also more important that it tries to stay (slew) within the correct wallclock time than it is that it goes up by a second per real second. To measure time *ALL* you want is a timer that goes up one second per second, which is almost completely out of scope for gettimeofday. – Thomas Apr 02 '19 at 09:19
  • @EliezerMiron if nothing else if you use gettimeofday() you have to handle the case of "negative time has passed". As the blog post says this is not a theoretical problem, but actually breaks TCP connections, reboots databases, crashes CDNs, and hangs `ping`. – Thomas Apr 02 '19 at 09:21
  • Thanks for explaining. What should be used for measuring function time, then, for debugging purposes? – Eliezer Miron Apr 02 '19 at 23:41
  • 2
    `CLOCK_MONOTONIC`, `CLOCK_MONOTONIC_RAW`, or `CLOCK_BOOTTIME`, depending on whether you want "seconds, excluding suspend time", "time units", or "seconds, including suspend time". – Thomas Apr 04 '19 at 09:58
3

If you want to measure code efficiency, or in any other way measure time intervals, the following will be easier:

#include <stdio.h>
#include <time.h>

int main()
{
   clock_t start = clock();
   //... do work here
   clock_t end = clock();
   double time_elapsed_in_seconds = (end - start)/(double)CLOCKS_PER_SEC;
   printf("%f seconds\n", time_elapsed_in_seconds);
   return 0;
}

hth

Armen Tsirunyan
  • 6
    This measures cpu time, not real time, and often has very bad resolution. – R.. GitHub STOP HELPING ICE Mar 19 '11 at 14:19
  • 1
    This is good if you want to measure CPU efficiency. If you are measuring something else -- say an I/O operation -- it might be better to use gettimeofday(). – Brian L Mar 19 '11 at 14:20
  • I'm just wondering if he is trying to do benchmark tests. Isn't there a C lib somewhere for that? But of course you're all right. I agree cause I'm learning too. Just please you guys, don't leave off the headers. It's all in the libs we use right? – Douglas G. Allen May 28 '15 at 20:35
0

Your curtime variable holds the number of seconds since the epoch. If you get one before and one after, the later one minus the earlier one is the elapsed time in seconds. You can subtract time_t values just fine.

Borealid