
I am working on a project in C for which there is no known deterministic algorithm, so I am solving it with a Monte Carlo technique. I want to limit the number of random guesses by an execution-time limit that the user specifies as a command-line argument; in other words, I want to use the full time limit to run as many random iterations as possible. Can I check the execution time elapsed so far in a loop condition, something like this?

for (trials = 0; execution_time < specified_time; trials++)

If so, how do I do it? Any other approach would also be welcome. Thank you.

P.S. I am using Code::Blocks 10.05 and the GNU compiler (GCC).

  • see also [How to measure time in milliseconds using ANSI C?](http://stackoverflow.com/questions/361363/how-to-measure-time-in-milliseconds-using-ansi-c) – Nick Dandoulakis Jan 16 '11 at 14:02

3 Answers


You can try the standard function clock(), which returns the number of internal clock ticks since the program started. See the documentation of that function for more information.
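For instance, a minimal sketch of how clock() could drive the loop; the command-line parsing and run_one_trial() are placeholders standing in for the user-supplied limit and one Monte Carlo guess:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void run_one_trial(void) { /* one random guess goes here */ }

int main(int argc, char *argv[])
{
    double limit = (argc > 1) ? atof(argv[1]) : 1.0;   /* time limit in seconds */
    clock_t start = clock();                           /* ticks at loop start */
    long trials = 0;

    /* Loop while the elapsed ticks, converted to seconds, stay below the limit.
       Note that clock() measures processor time, which can differ from wall-clock time. */
    while ((double)(clock() - start) / CLOCKS_PER_SEC < limit) {
        run_one_trial();
        trials++;
    }

    printf("Ran %ld trials\n", trials);
    return 0;
}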

– Al Kepp
  • That usually works. There are some cases (odd, but fairly common) where time goes backwards due to some kind of paravirtualization. At that point, jiffies don't always mean jiffies, and the resulting clock_t is off by a few (or in bad cases, just -1). Even if you just record the time when you start, then check periodically and subtract, that kind of jitter is still problematic. So I agree, clock() is the simplest / cheapest way. – Tim Post Jan 16 '11 at 14:18
  • Thank you everybody for your valuable comments. Two lines of code did it for me (see my answer below). – rrs90 Jan 18 '11 at 15:00

Yes, if you have a sufficiently fine-grained clock on your computer (and you do).

  1. Record the time when the simulation started.

  2. For each trip around the loop, get the current time and compute the delta between it and the start time. If the delta exceeds the limit, stop. (A short sketch of this pattern follows the list of clock functions below.)

If you use time() with its one-second granularity, beware of quantization effects. If the user said '1 second', you could end up running for only a tiny fraction of a second: if your program started at T=N.999s, you would stop at T=(N+1).001s. The same effect is possible with any quantum, but with microsecond or nanosecond granularity the error is negligible in practice.

The high-resolution clock functions I know of are:

  • clock_gettime() - POSIX (nanosecond)
  • gettimeofday() - POSIX (microsecond)
  • times() - Unix System V (CLK_TCK per second)
  • ftime() - Ancient Unix (millisecond)
  • clock() - ISO C (CLOCKS_PER_SEC per second)
  • time() - ISO C (second)
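As a concrete illustration of steps 1 and 2, here is a minimal sketch using the first function in the list. It assumes a POSIX system that provides CLOCK_MONOTONIC (on older glibc you may need to link with -lrt), and it hard-codes the time limit as a placeholder:

#include <stdio.h>
#include <time.h>

int main(void)
{
    const double limit = 2.0;                 /* placeholder for the user-supplied limit in seconds */
    struct timespec start, now;
    long trials = 0;

    clock_gettime(CLOCK_MONOTONIC, &start);   /* step 1: record the start time */

    do {
        /* ... one random trial goes here ... */
        trials++;

        clock_gettime(CLOCK_MONOTONIC, &now); /* step 2: current time, then the delta */
    } while ((now.tv_sec - start.tv_sec)
             + (now.tv_nsec - start.tv_nsec) / 1e9 < limit);

    printf("Ran %ld trials in about %.1f seconds\n", trials, limit);
    return 0;
}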
– Jonathan Leffler
  • Thank you everybody for your valuable comments. Just two lines of code did the job for me (see my answer below). – rrs90 Jan 18 '11 at 15:05

Thank you everybody for your comments. A simple two-line loop did the job for me:

#include <time.h>   /* for time() and time_t */

time_t start_time = time(NULL);                        /* record when the loop started */
while ((int)(time(NULL) - start_time) < execution_time)
{
   /* ...... */
}
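One portability note on the subtraction above: on POSIX systems time_t counts whole seconds, so the cast works, but ISO C only guarantees that the difference between two time_t values is meaningful through difftime(). A sketch of the same loop using difftime(), under the same assumption that execution_time holds the limit in seconds:

time_t start_time = time(NULL);                          /* record the start time */
while (difftime(time(NULL), start_time) < execution_time)
{
   /* ...... */
}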
– rrs90