
What is the best way to exit a loop as close to 30ms as possible in C++? Polling boost::microsec_clock? Polling QTime? Something else?

Something like:

A = now;
for (blah; blah; blah) {
    Blah();
    if (now - A > 30000)   // presumably microseconds: 30,000 µs = 30 ms
        break;
}

It should work on Linux, OS X, and Windows.
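
Written out with boost::posix_time (one of the options I'm asking about, and portable across all three platforms), the idea would look something like this; UpdateSimulation() is just a stand-in for the real per-step work:

#include <boost/date_time/posix_time/posix_time.hpp>

void UpdateSimulation() { /* one simulation step */ }

void runFor30ms()
{
    using namespace boost::posix_time;
    const ptime start = microsec_clock::universal_time();
    for (;;) {
        UpdateSimulation();
        const time_duration elapsed = microsec_clock::universal_time() - start;
        if (elapsed.total_milliseconds() >= 30)   // time budget used up
            break;
    }
}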

The calculations in the loop are for updating a simulation. Every 30ms, I'd like to update the viewport.

Neil G
  • added a qt tag to your question as you mentioned QTime – Idan K Jun 03 '09 at 17:55
  • Do you mean "any time after 30 ms" or "as close to 30 ms as possible"? – Max Lybbert Jun 03 '09 at 17:55
  • Maybe it would help to tell us what you really want to accomplish. – lothar Jun 03 '09 at 17:56
  • You might want to add what hardware/OS you're using too. Different hardware platforms provide different-precision timing counters. – J. Polfer Jun 03 '09 at 18:02
  • Answered comments in question. – Neil G Jun 03 '09 at 18:12
  • Thanks for the great answers everyone! This is what StackOverflow is for. The top three answers are all solutions: use clock(), use threads, and poll the hardware clock. The first solution is the least work, and is my temporary solution until I can work out a multi-threaded model. (No one talked about boost::microsec_timer or whatever it's called, surprisingly.) – Neil G Jun 04 '09 at 03:16

9 Answers


The calculations in the loop are for updating a simulation. Every 30ms, I'd like to update the viewport.

Have you considered using threads? What you describe seems the perfect example of why you should use threads instead of timers.

The main thread keeps taking care of the UI, and has a QTimer set to 30ms to update it. It locks a QMutex to get access to the data, performs the update, and releases the mutex.

The second thread (see QThread) does the simulation. For each cycle, it locks the QMutex, does the calculations and releases the mutex when the data is in a stable state (suitable for the UI update).

With the increasing prevalence of multi-core processors, you should think more and more about using threads rather than timers. Your application automatically benefits from the added power (multiple cores) of new processors.
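
A minimal sketch of that structure, assuming Qt; the class and function names here are illustrative, not part of the answer:

#include <QThread>
#include <QMutex>

class SimulationThread : public QThread
{
public:
    QMutex mutex;                 // guards the shared simulation data

protected:
    void run()                    // executes in the second thread
    {
        forever {                 // Qt's infinite-loop macro
            mutex.lock();
            stepSimulation();     // advance to the next stable state
            mutex.unlock();       // now the UI thread may read the data
        }
    }

private:
    void stepSimulation();        // the actual calculations (not shown)
};

In the main thread, a QTimer with a 30ms interval fires a slot that locks the mutex, redraws the viewport from the shared data, and unlocks it.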

Juliano
  • Upvoted. Thanks, I will give this some serious consideration. Right now, debugging the single-threaded app is a pain, but when it's stable I will probably go for a multi-threaded model. Thanks for detailing the plan for converting this to a multi-core application. – Neil G Jun 03 '09 at 19:25
  • If the poster were, say, implementing a progress control or something similar, I'd probably use a timer service/callback that checked some shared state. My normal thread would continue processing and update a marker of some sort, and the callback thread would take care of doing the updating. On Windows, you would catch the WM_TIMER message, for example. – Chris K Jun 04 '09 at 03:13
  • @darthcoder In multithreaded processes you don't update markers; you have to use a monitor or a mutex, or you risk touching data that is not ready for consumption. Also, you should leave the main thread idle, receiving messages and doing the UI updates, and have a secondary thread (not in the event loop) doing the heavy work. That way your application stays responsive during processing. QTimer does what you proposed with WM_TIMER in a portable way (Neil G said that it had to be portable). – Juliano Jun 04 '09 at 04:01

While this does not answer the question directly, it might offer another look at the solution. What about placing the simulation code and the user interface in different threads? If you use Qt, the periodic update can be realized using a timer or even QThread::msleep(). You can adapt the threaded Mandelbrot example to suit your needs.
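
For instance, the msleep() variant could look roughly like this (SimThread is an illustrative name, not from the answer):

#include <QThread>

class SimThread : public QThread
{
protected:
    void run()
    {
        forever {
            // ... advance the simulation one batch, then yield ~30ms ...
            msleep(30);
        }
    }
};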

Ariya Hidayat

The code snippet example in this link pretty much does what you want:

http://www.cplusplus.com/reference/clibrary/ctime/clock/

Adapted from their example:

#include <time.h>

/* Busy-waits until the given number of seconds has elapsed.
   Note: on POSIX systems clock() measures CPU time, not wall-clock time.
   For the 30ms case, scale the deadline: clock() + 30 * CLOCKS_PER_SEC / 1000. */
void runwait ( int seconds )
{
   clock_t endwait = clock () + seconds * CLOCKS_PER_SEC ;
   while (clock () < endwait)
   {
      /* Do stuff while waiting */
   }
}
K M
  • Is it possible to use the ctime library to approach the granularity (30ms) that he is asking for? – J. Polfer Jun 03 '09 at 18:09
  • -1, this is busy waiting (the process will suck CPU while doing nothing). The cases where busy waiting is acceptable are very specific (in that case, it may be acceptable for 30 ms if it's the only solution, but not for a whole second). – Bastien Léonard Jun 03 '09 at 18:13
  • True, on a graphical framework this is a no-no (guessing by the Qt reference). If it's just a console app it's probably okay, as the OS would take care of it. – J. Polfer Jun 03 '09 at 18:16
  • On my machine, Intel running OS X: #define CLOCKS_PER_SEC (__DARWIN_CLK_TCK) #define __DARWIN_CLK_TCK 100 /* ticks per second */ – Neil G Jun 03 '09 at 18:21
  • I'm not trying to wait, but polling the clock() would work if CLOCKS_PER_SEC is always >= 100 (at least) – Neil G Jun 03 '09 at 19:23

The short answer is: you can't in general, but you can if you're running on the right OS or on the right hardware.

You can get CLOSE to 30ms on all of these OSes using an assembly call on Intel systems, and something else on other architectures. I'll dig up the reference and edit the answer to include the code when I find it.

The problem is the time-slicing algorithm and how close to the end of your time slice you are on a multi-tasking OS.

On some real-time OS's, there's a system call in a system library you can make, but I'm not sure what that call would be.

edit: LOL! Someone already posted a similar snippet on SO: Timer function to provide time in nano seconds using C++

VonC has got the comment with the CPU timer assembly code in it.
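
For reference, a rough sketch of reading the timestamp counter through the compiler intrinsic (the same idea as that assembly, not VonC's exact code); see bk1e's caveat below about SMP and power management:

#include <stdint.h>
#ifdef _MSC_VER
#include <intrin.h>       // MSVC
#else
#include <x86intrin.h>    // GCC/Clang
#endif

uint64_t cpu_ticks()
{
    return __rdtsc();     // raw cycle count; divide by CPU frequency to get seconds
}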

James
  • I haven't been ignoring your answer -- I've been looking into it. This seems like the best way to get a resolution better than 0.01s. However, it's a lot more work for me. – Neil G Jun 03 '09 at 19:57
  • Using RDTSC directly is a recipe for pain, due to SMP and power management. The OS timing functions can use a sane timer (e.g. HPET) or attempt to work around SMP clock skew (e.g. QPC() w/AMD Processor Driver). – bk1e Jun 04 '09 at 07:09

If you need to do work until a certain time has elapsed, then docflabby's answer is spot-on. However, if you just need to wait, doing nothing, until a specified time has elapsed, then you should use usleep().
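
For example (usleep() is POSIX; on Windows, Sleep(30) from <windows.h> is the rough equivalent):

#include <unistd.h>

int main(void)
{
    usleep(30000);   /* 30,000 microseconds = 30ms, without burning CPU */
    return 0;
}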

SingleNegationElimination

According to your question, every 30ms you'd like to update the viewport. I once wrote a similar app that probed hardware every 500ms. While this doesn't directly answer your question, I have the following follow-ups:

  • Are you sure that Blah(), for updating the viewport, can execute in less than 30ms in every instance?
  • Seems more like running Blah() would be done better by a timer callback.
  • It's very hard to find a library timer object that will fire on a 30ms interval to do updates in a graphical framework. On Windows XP I found that the standard Win32 API timer, which posts window messages when the interval expires, couldn't do updates any faster than every 300ms on a 2GHz P4, no matter how low I set its interval. While there were high-performance timers available in the Win32 API, they have many restrictions, namely that you can't do any IPC (like updating UI widgets) in a loop like the one you cited above.
  • Basically, the upshot is you have to plan very carefully how you want to have updates occur. You may need to use threads, and look at how you want to update the viewport.

Just some things to think about. They caught me by surprise when I worked on my project. If you've thought these things through already, please disregard my answer :0).

J. Polfer
  • Thanks for your follow-ups. Blah() is the simulation update code--not the UI update code. It will almost surely not take longer than 30ms. If it does, I can add clock polling code inside it, assuming the clock-polling code is fast. If the atomic operations inside Blah() take longer than 30ms, then tough-luck user. The UI update code is called automatically by Qt when the loop exits and the function returns. The function is called on a timer callback as you suggest. I don't think I have any IPC at all, but I could be wrong. – Neil G Jun 03 '09 at 19:05

You might consider just updating the viewport every N simulation steps rather than every K milliseconds. If this is (say) a serious commercial app, then you're probably going to want to go the multi-thread route suggested elsewhere, but if (say) it's for personal or limited-audience use and what you're really interested in is the details of whatever it is you're simulating, then every-N-steps is simple, portable and may well be good enough to be getting on with.
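
A sketch of the every-N-steps idea; N and both function names are placeholders:

void stepSimulation() { /* one step of the real calculation goes here */ }
void redrawViewport() { /* trigger the real repaint here */ }

const int N = 100;       // tune so a batch takes roughly 30ms on the target hardware

void runBatch()
{
    for (int i = 0; i < N; ++i)
        stepSimulation();
    redrawViewport();    // refresh the display once per batch
}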


See QueryPerformanceCounter and QueryPerformanceFrequency
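
A minimal usage sketch (Windows only, as the comments below point out):

#include <windows.h>

double elapsed_ms(const LARGE_INTEGER &start)
{
    LARGE_INTEGER now, freq;
    QueryPerformanceCounter(&now);
    QueryPerformanceFrequency(&freq);   // counter ticks per second
    return (now.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;
}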

gatorfax
  • Assuming you're on a Win32 platform; the OP mentioned Qt, so my guess is that the OP doesn't/can't use the Win32 API. – J. Polfer Jun 03 '09 at 18:01
  • Yeah, I'd like to stay OS independent if possible. If I find a solution for the three mentioned OSs, I could write the platform-independent time polling code, but I was hoping someone had already done it. – Neil G Jun 03 '09 at 18:14
  • My answer was posted BEFORE the question was updated to specify the OS. Thanks. – gatorfax Jun 03 '09 at 18:25

If you are using Qt, here is a simple way to do this:

QTimer* t = new QTimer( parent ) ;
t->setInterval( 30 ) ; // in msec
t->setSingleShot( false ) ;
connect( t, SIGNAL( timeout() ), viewPort, SLOT( redraw() ) ) ;

You'll need to specify viewPort and redraw(). Then start the timer with t->start().

swongu
  • I don't think this will interrupt the for loop, since it's all happening on one thread. – Neil G Jun 03 '09 at 19:12
  • I am, however, using a QTimer to call the function that contains the for loop. – Neil G Jun 03 '09 at 19:12
  • Why don't you just use Qt's event loop? Will you be doing anything else inside your for loop? – swongu Jun 03 '09 at 20:08
  • My loop updates the simulation. It is running within the event loop, but it has to return to the event loop periodically -- not too quickly and not too slowly. That's why I'm asking. – Neil G Jun 04 '09 at 03:11