To preface, I am on a Unix (linux) system using gcc.

What I am stuck on is how to accurately implement a way to run a section of code for a certain amount of time.

Here is an example of something I have been working with:

struct timeb start, check;
int64_t duration = 10000;
int64_t elapsed = 0;

ftime(&start);

while ( elapsed < duration ) {
    // do a set of tasks
    ftime(&check);
    elapsed += ((check.time - start.time) * 1000)  + (check.millitm - start.millitm);
}

I was thinking this would carry on for 10000 ms (10 seconds), but it finished almost instantly. I was basing this off other questions such as How to get the time elapsed in C in milliseconds? (Windows). But then I thought that if, on the first call to ftime, the struct holds time = 1, millitm = 999, and on the second call time = 2, millitm = 01, it would calculate the elapsed time as 1002 milliseconds. Is there something I am missing?

Also, the suggestions in the various Stack Overflow questions, ftime() and gettimeofday(), are listed as deprecated or legacy.

I believe I could convert the start time into milliseconds, and the check time into milliseconds, then subtract start from check. But milliseconds since the epoch requires 42 bits, and I'm trying to keep everything in the loop as efficient as possible.

What approach could I take towards this?

    You can arrange for a signal to be sent to your process after a certain time has elapsed via `timer_create()`. – EOF Aug 10 '16 at 21:53
    I would expect `elapsed = ...` rather than `elapsed += ..` – chux - Reinstate Monica Aug 10 '16 at 21:56
  • @chux Ok, yes, I didn't mean to include that. My issue is with the incorrect detection of the change in time in that little example. – Austin Bodzas Aug 10 '16 at 22:00
  • "keep everything in the loop as efficient as possible." --> Efficiency can be measured in program speed, program size, data usage, code maintainability. Which "efficiency" is of concern? – chux - Reinstate Monica Aug 10 '16 at 22:25
  • "epoch requires 42 bits" implies you think or know `time_t` is a 32-bit integer. Many systems use a wider `time_t` to avoid the [Year 2038 problem](https://en.wikipedia.org/wiki/Year_2038_problem). I hope, as it is 2016, your system is not using a 32-bit `time_t`. – chux - Reinstate Monica Aug 10 '16 at 22:29

2 Answers


The code calculates elapsed time incorrectly: it should assign, not accumulate.

// elapsed += ((check.time - start.time) * 1000)  + (check.millitm - start.millitm);
elapsed = ((check.time - start.time) * (int64_t)1000)  + (check.millitm - start.millitm);

There is some concern about check.millitm - start.millitm. Given the definition of struct timeb below, millitm is an unsigned short and can be expected to be promoted to int before the subtraction occurs. So the difference will be in the range [-999 ... 999].

       struct timeb {
           time_t         time;
           unsigned short millitm;
           short          timezone;
           short          dstflag;
       };

IMO, more robust code would handle the ms conversion in a separate helper function. This matches OP's "I believe I could convert the start time into milliseconds, and the check time into milliseconds, then subtract start from check."

int64_t timeb_to_ms(struct timeb *t) {
  return (int64_t)t->time * 1000 + t->millitm;
}

struct timeb start;
ftime(&start);
int64_t start_ms = timeb_to_ms(&start);

int64_t duration = 10000 /* ms */;
int64_t elapsed = 0;

while (elapsed < duration) {
  // do a set of tasks
  struct timeb check;
  ftime(&check);
  elapsed = timeb_to_ms(&check) - start_ms;
}
chux - Reinstate Monica
  • But how would this fix the issue mentioned by "But then I thought that if upon the first call of ftime, the struct is time = 1, millitm = 999 and on the second call time = 2, millitm = 01 it would be calculating the elapsed time as being 1002 milliseconds." – Austin Bodzas Aug 10 '16 at 22:00
  • @Austin Bodzas `millitm` is an `unsigned short`, which is promoted to `int` on your platform. `1 - 999` is `-998`. `(2-1)*1000 + -998` is `2`. If concerned that `unsigned short` may have the same range as `unsigned`, use `((check.time - start.time) * (int64_t)1000) + ((int)check.millitm - (int)start.millitm);` – chux - Reinstate Monica Aug 10 '16 at 22:03

If you want efficiency, let the system send you a signal when a timer expires.

Traditionally, you can set a timer with a resolution in seconds with the alarm(2) syscall.

The system then sends you a SIGALRM when the timer expires. The default disposition of that signal is to terminate the process.

If you handle the signal, you can longjmp(3) from the handler to another place.

I don't think it gets much more efficient than SIGALRM + longjmp (with an asynchronous timer, your code basically runs undisturbed without having to do any extra checks or calls).

Below is an example for you:

#define _GNU_SOURCE /*for sysv_signal()*/
#include <unistd.h>
#include <stdio.h>
#include <signal.h>
#include <setjmp.h>

static jmp_buf jmpbuf;

void hndlr(int sig);
void loop();
int main(){

    /*sysv_signal handlers get reset after a signal is caught and handled*/
    if(SIG_ERR==sysv_signal(SIGALRM,hndlr)){
        perror("couldn't set SIGALRM handler");
        return 1;
    }

    /*the handler will jump you back here*/
    setjmp(jmpbuf);

    /*alarm() cannot fail; it returns the seconds remaining on any previous alarm*/
    alarm(3/*seconds*/);

    loop();

    return 0;
}

void hndlr(int sig){
    (void)sig; /*unused*/
    puts("Caught SIGALRM");
    puts("RESET");
    longjmp(jmpbuf,1);
}

void loop(){
    int i;
    for(i=0;  ; i++){
        //print every 100-millionth iteration
        if(0==i%100000000){
            printf("%d\n", i);
        }
    }
}

If alarm(2) isn't enough, you can use timer_create(2) as EOF suggests.

Petr Skocik
  • `man getcontext` *Do not leave the handler using longjmp(3): it is undefined what would happen with contexts. Use siglongjmp(3) or setcontext() instead.* – EOF Aug 11 '16 at 11:52
  • @EOF the manpage says that siglongjmp works like longjmp but also restores the signal mask. I'm not setting a signal mask in this example, though, so I think it shouldn't matter here. Generally, I guess I would use sigaction and siglongjmp, but I think longjmp is OK here. – Petr Skocik Aug 11 '16 at 12:00
  • The problem is that `setjmp/longjmp` were woefully underspecified in C (like `signal` and other functions as well). The result is that different implementations do starkly different things for these functions, and C doesn't guarantee much. Given that `longjmp()` is not on the list of async-safe functions (much like `puts()`), your program exhibits *undefined behavior*. – EOF Aug 11 '16 at 12:05
  • @EOF `siglongjmp` is async-unsafe too. But that shouldn't matter. The piece above will never trigger another `longjmp` while one is running. The system will never send you two consecutive SIGALRMs. And if you do receive two consecutive SIGALRMs, then if they're very close to each other, the second one will kill you (with `sysv_signal`, anyway). If it doesn't, then you've reestablished the handler, which implies the `longjmp` call finished. If you use `sigaction` handlers with sigmask changes, you need `siglongjmp`, though, to restore the original sigmask. – Petr Skocik Aug 11 '16 at 12:26
  • According to [POSIX](http://pubs.opengroup.org/onlinepubs/009695399/functions/xsh_chap02_04.html) *[...]when a signal interrupts an unsafe function and the signal-catching function calls an unsafe function, the behavior is undefined.*, the unsafe function that gets interrupted and the unsafe function in the signal handler don't have to be the same for the behavior to be undefined. – EOF Aug 11 '16 at 12:33
  • Also, the [manpage](http://man7.org/linux/man-pages/man7/signal.7.html) on Linux has this juicy bit: *If a signal interrupts the execution of an unsafe function, and handler either calls an unsafe function or handler terminates via a call to longjmp() or siglongjmp() and the program subsequently calls an unsafe function, then the behavior of the program is undefined.* Given that you don't check the return value of `setjmp()`, the behavior is undefined. – EOF Aug 11 '16 at 12:40
  • @EOF I agree it is undefined. The `loop` function prints. If it didn't, though, I believe that all should be well as `loop()` is otherwise async-safe and `loop()` is the only place where the above function should receive SIGALRM. I don't see what me not checking `setjmp()` has to do with anything. – Petr Skocik Aug 11 '16 at 13:05
  • If the `longjmp()` is reached, you return to the `setjmp()`, and since you don't do anything special there, you continue executing into the `loop()` again, where you call an async-unsafe function again, which is undefined according to the manpage I cited. Few things are safe to do in a signal-handler, chief among them is `_Exit()` or setting a `volatile sig_atomic_t` or lock-free atomic. – EOF Aug 11 '16 at 13:43