
I work on a 2D game engine that has a function called LimitFrameRate to ensure that the game does not run so fast that a user cannot play it. In this engine the speed of the game is tied to the frame rate, so generally one wants to limit the frame rate to about 60 fps. The function is relatively simple: calculate the amount of time remaining before we should start work on the next frame, convert that to milliseconds, sleep for that number of milliseconds (which may be 0), repeat until it's exactly the right time, then exit. Here's the code:

// previousFrame is a field holding the Stopwatch timestamp of the previous frame.
public virtual void LimitFrameRate(int fps)
{
  long freq = System.Diagnostics.Stopwatch.Frequency;
  long frame = System.Diagnostics.Stopwatch.GetTimestamp();
  // Loop until one full frame period (freq / fps ticks) has elapsed since previousFrame.
  while ((frame - previousFrame) * fps < freq)
  {
     // Milliseconds remaining until the next frame is due:
     // ((previousFrame + freq / fps) - frame) / freq * 1000, kept in integer math.
     int sleepTime = (int)((previousFrame * fps + freq - frame * fps) * 1000 / (freq * fps));
     System.Threading.Thread.Sleep(sleepTime);
     frame = System.Diagnostics.Stopwatch.GetTimestamp();
  }
  previousFrame = frame;
}

Of course I have found that, due to the imprecise nature of the sleep function on some systems, the frame rate comes out quite different from what's expected. The precision of the sleep function is only about 15 milliseconds, so you can't wait for less than that. The strange thing is that some systems achieve a perfect frame rate with this code, across a range of target frame rates, but other systems don't. I can remove the sleep call and then the other systems will achieve the frame rate, but then they hog the CPU.

I have read other articles about the sleep function.

What's a coder to do? I'm not asking for a guaranteed frame rate (or guaranteed sleep time, in other words), just a general behavior. I would like to be able to sleep (for example) 7 milliseconds to yield some CPU to the OS and have it generally return control in 7 milliseconds or less (so long as it gets some of its CPU time back), and if it takes more sometimes, that's OK. So my questions are as follows:

  1. Why does sleep work perfectly and precisely in some Windows environments and not in others? (Is there some way to get the same behavior in all environments?)
  2. How do I achieve a generally precise frame rate without hogging the CPU from C# code?
BlueMonkMN
  • Do you specifically need to Sleep? What most games that run on a general-purpose OS and need a fixed frame rate do is simply busy-spin instead of sleeping, or sync on vsync. – nos Jan 29 '11 at 14:40
  • Busy-spinning for several milliseconds is evil. – CodesInChaos Jan 29 '11 at 14:43
  • @CodeInChaos yes. It's still what you have to do, though, unless you're running on a real-time OS. – nos Jan 29 '11 at 14:46
  • You can use `timeBeginPeriod(1)` and just call `sleep(1)` in a loop until it's time. While `timeBeginPeriod` is evil too, it's the lesser evil in this case. – CodesInChaos Jan 29 '11 at 14:47

3 Answers


You can use timeBeginPeriod to increase the timer/sleep accuracy. Note that it affects the system globally and might increase power consumption.
You can call timeBeginPeriod(1) at the beginning of your program. On the systems where you observed higher timer accuracy, another running program had probably already done that.
And I wouldn't bother calculating the sleep time; I'd just call Sleep(1) in a loop.
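
For reference, here is a minimal P/Invoke sketch for calling it from C#; timeBeginPeriod and timeEndPeriod are the actual winmm.dll exports, while the wrapper class name is just illustrative:

using System.Runtime.InteropServices;

static class TimerResolution
{
    // Ask Windows for 1 ms timer resolution. This affects the whole system,
    // so pair every timeBeginPeriod call with a matching timeEndPeriod.
    [DllImport("winmm.dll")]
    public static extern uint timeBeginPeriod(uint periodMs);

    [DllImport("winmm.dll")]
    public static extern uint timeEndPeriod(uint periodMs);
}

// At startup:           TimerResolution.timeBeginPeriod(1);
// Before shutting down: TimerResolution.timeEndPeriod(1);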

But even with only 16 ms precision you can write your code so that the error averages out over time. That's what I'd do. It isn't hard to code and should work with only a few adaptations to your current code.

Or you can switch to code that makes movement proportional to the elapsed time. But even in this case you should implement a frame-rate limiter, so you don't get uselessly high frame rates and unnecessarily consume power.
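
As an illustration of that approach, a minimal sketch of time-proportional movement; the Player class, its fields, and the speed value are made up for the example, not taken from the engine:

using System.Diagnostics;

class Player
{
    public double X;                       // position in world units
    public double SpeedPerSecond = 120.0;  // world units per second (illustrative)
    private long lastTimestamp = Stopwatch.GetTimestamp();

    // Movement is scaled by real elapsed time, so speed is the same
    // whether the game runs at 30, 60, or 200 fps.
    public void Update()
    {
        long now = Stopwatch.GetTimestamp();
        double elapsedSeconds = (now - lastTimestamp) / (double)Stopwatch.Frequency;
        lastTimestamp = now;
        X += SpeedPerSecond * elapsedSeconds;
    }
}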

Edit: Based on ideas and comments in this answer, the accepted answer was formulated.

CodesInChaos
  • Looks like a good option for some, but this game engine is written in .NET and has had some limited success with cross-platform games so it would be a shame to lose that ability. However, I know Mono maps some Win32 calls to Linux equivalents... if that happened here too, maybe it would be alright. I don't suppose you have handy the C# code necessary to invoke that WinMM function... I suppose I can figure it out with a little effort. – BlueMonkMN Jan 29 '11 at 14:59
  • Maybe I'll try the average over time solution. How much time do you suppose is appropriate... the past 5 frames? less? more? – BlueMonkMN Jan 29 '11 at 15:00
  • I have posted the suggested change at http://gamedev.enigmadream.com/index.php?topic=1525.30 and hopefully the person who could reproduce the problem will be able to test and confirm this fix. – BlueMonkMN Jan 29 '11 at 15:33
  • I just got a chance to test the fix on my laptop, which had a slightly different problem, I think, because it has only 1 CPU core, and sometimes had a hard time keeping up. Averaging the FPS over 5 frames did improve it somewhat. Out of curiosity I also tried averaging over 60 frames and according to the FPS number, it was even better, sometimes reaching and slightly exceeding the FPS target, but I could sense the frame rate varying as I played more than when I used only 5 frames because it was trying to keep up with lost frames from a second ago. Waiting to see how the other test goes. – BlueMonkMN Jan 31 '11 at 11:58
  • How do you try to enforce the target framerate? If you're doing it the wrong way, you'll introduce periodic oscillations into the framerate. – CodesInChaos Jan 31 '11 at 12:41
  • Yeah, that's the problem reported in the forum I linked to above. You can see the code I suggested there... can you suggest a correction to my proposed code here and/or in the linked forum? I just made the code wait until X + Y*Z, where X = the timestamp from the beginning of 5 frames ago, Y = how long each frame should take, and Z = 5, the number of frames being averaged. – BlueMonkMN Jan 31 '11 at 21:04
  • @Blue Yes that code obviously introduces oscillations. Perhaps an exponential decay of history instead of a fixed number of frames exhibits better behavior. – CodesInChaos Feb 01 '11 at 09:00
  • I can't really visualize the problem/solution or how your suggestion fits into it. Did you have a specific correct solution in mind when you suggested it? Was it just an idea, or do you know it works when done in some specifically "correct" way? I'm not clear how this "decay" you're talking about affects the math or improves the problem. – BlueMonkMN Feb 01 '11 at 11:56
  • What if I just took a fixed point in time (say, the first frame) and always based my time on that 1 frame. For 60 FPS, on the 60th frame, if 1 second or more has passed, I don't wait. If less has passed, I sleep trying to target 1 second. On the 61st frame, if 1.0167 seconds or more have passed, I don't wait, otherwise I wait trying to target 1.0167 seconds since frame 1. – BlueMonkMN Feb 01 '11 at 20:12
  • Now that I think about it more I think my last comment and your suggestion make sense together. I would take some fixed frame and point in time and calculate the frame count and time from there, but then reset it periodically too in case the game is paused or something gets it out of sync. That would be the decaying history you mentioned, I suppose. I think I understand your suggestion now if it's similar to what I said in my previous comment. I will try it later today if I can remember, and pass it along to the person experiencing the problem. – BlueMonkMN Feb 01 '11 at 20:27
  • It works great! It's by far the best solution I've tried, so I'll be going with this. Store some initial timestamp and set a frame counter to 0; then on every frame, use those to calculate when the timestamp for the end of this frame should be. If the wait time is greater than 0, go into a loop. Within the loop, if the wait time is at least 1 millisecond, use sleep to target how long you want to wait. Stay in the loop until the time has elapsed (whether or not you are sleeping). Reset the frame counter and start timestamp every F frames, where F is the target FPS. – BlueMonkMN Feb 04 '11 at 14:14
  • @BlueMonkMN I think it's better to post that as another answer. – CodesInChaos Nov 05 '15 at 15:07

Almost all game engines handle updates by passing in the time since the last frame and having movement, etc., behave proportionally to that time; any other implementation than this is faulty.

Although CodeInChaos's suggestion answers your question, and might work partially in some scenarios, it's just plain bad practice.

Limiting the frame rate to your desired 60 fps works only when the computer is running fast enough. The second a background task eats up some processor power (for example, the virus scanner starts) and your game drops below 60 fps, everything goes much slower. Even though your game could be perfectly playable at 35 fps, this makes it impossible to play because everything moves at half speed.

Things like sleep are not going to help, because they halt your process in favor of another process, and that process must in turn be halted before you run again. Sleep(1 ms) just means that after 1 ms your process is returned to the queue of processes waiting for permission to run; Sleep(1 ms) can therefore easily take 15 ms, depending on the other running processes and their priorities.
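
A quick standalone snippet makes this easy to observe; without a raised timer resolution it will typically print something near 15 ms:

using System;
using System.Diagnostics;
using System.Threading;

class SleepProbe
{
    static void Main()
    {
        // Request a 1 ms sleep and measure how long it really took.
        var sw = Stopwatch.StartNew();
        Thread.Sleep(1);
        sw.Stop();
        Console.WriteLine("Sleep(1) actually took {0:F2} ms", sw.Elapsed.TotalMilliseconds);
    }
}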

So my suggestion is that you add, as soon as possible, some sort of "elapsedSeconds" variable that you use in all your update methods; the earlier you build it in, the less work it is. This will also ensure compatibility with Mono.

If you have two parts of your engine, say a physics engine and a render engine, and you want to run them at different frame rates, then just check how much time has passed since the last frame and decide whether to update the physics engine; as long as you incorporate the time since the last update into your calculations, you will be fine.
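
For illustration, a minimal sketch of that decoupling, stepping physics on a fixed timestep while rendering once per loop pass; the delegates stand in for the engine's own methods and are assumptions of this example:

using System;
using System.Diagnostics;

static class GameLoop
{
    const double PhysicsStep = 1.0 / 60.0;  // physics at a fixed 60 Hz

    // running, UpdatePhysics, and Render are passed in only to keep the
    // sketch self-contained; in a real engine they'd be its own members.
    public static void Run(Func<bool> running, Action<double> UpdatePhysics, Action Render)
    {
        double accumulator = 0.0;
        long previous = Stopwatch.GetTimestamp();

        while (running())
        {
            long now = Stopwatch.GetTimestamp();
            accumulator += (now - previous) / (double)Stopwatch.Frequency;
            previous = now;

            // Run zero or more fixed physics steps to catch up with real time...
            while (accumulator >= PhysicsStep)
            {
                UpdatePhysics(PhysicsStep);
                accumulator -= PhysicsStep;
            }

            // ...then draw once per loop pass.
            Render();
        }
    }
}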

Also, never draw more often than you move something; it's a waste to draw exactly the same screen twice. So if the only way anything can change on screen is by updating your physics engine, keep the render-engine and physics-engine updates in sync.

Roy T.
  • Most RTS engines have a constant (game simulation) frame rate and don't use time-based movement. The modern ones sometimes decouple the graphics frame rate from the simulation frame rate. – CodesInChaos Jan 30 '11 at 09:58
  • I'm in the game development scene and have some experience with 2D and 3D games and 2D/3D physics engines, but tbh I don't have experience with RTSes; could you give a few examples? And what is the advantage of using a fixed timestep in RTS games? When the game runs slowly this causes a lot of problems (especially in networked games, although that is handled a bit differently). – Roy T. Jan 30 '11 at 11:31
  • This game engine is intended to be simple and flexible because it is a 2-D game creator kit intended to be usable by non-developers. As such, physics and rendering are coupled in a frame-based design. And the sample code for implementing network support is also frame-based. No client sees any movement until all of them have received the frame's movement data from all others. Yes, it's not the most friendly design for bogged-down environments, but in my experience that has not been a problem. Most systems have multi-core CPUs now, and I don't multi-task while playing games anyway. – BlueMonkMN Jan 30 '11 at 13:49
  • Typically an RTS uses network code that assumes that if you input the same commands into the game at the same time, the game will run identically on all computers in that network game. This obviously requires a fixed simulation frame rate. They use this model mainly because the network traffic doesn't increase for huge unit numbers. I know it works like this in StarCraft, and from what I've read, Warcraft 3, StarCraft 2, Supreme Commander, and Age of Empires use the same model. http://www.gamasutra.com/view/feature/3094/1500_archers_on_a_288_network_.php – CodesInChaos Jan 30 '11 at 15:41
  • The main problem with that model is that it easily desynchronizes. One cause of desyncs is different floating-point math on different machines (for example x87 vs SSE). Typical simulation frame rates range from 10 fps (Supreme Commander, which has a much higher graphics frame rate with extrapolation) to 24 fps in StarCraft. – CodesInChaos Jan 30 '11 at 15:42
  • @CodeInChaos hey thanks, that's a very interesting article. I had no idea that this technique was used. I played a lot of SupCom and especially if one player had a much slower computer the system had some problems (the game time got slower and slower). Thanks for the article. – Roy T. Jan 31 '11 at 07:13

Based on ideas and comments on CodesInChaos' answer, this is the final code I arrived at. I originally edited it into that answer, but CodesInChaos suggested it should be a separate answer.

// Fields used by this method:
//   private long fpsStartTime;  // timestamp at the start of the current averaging window
//   private int fpsFrameCount;  // frames completed since fpsStartTime
public virtual void LimitFrameRate(int fps)
{
   long freq = System.Diagnostics.Stopwatch.Frequency;
   long frame = System.Diagnostics.Stopwatch.GetTimestamp();
   // Wait until fpsFrameCount full frame periods have elapsed since fpsStartTime,
   // so the sleep error on any single frame averages out over the window.
   while ((frame - fpsStartTime) * fps < freq * fpsFrameCount)
   {
      int sleepTime = (int)((fpsStartTime * fps + freq * fpsFrameCount - frame * fps) * 1000 / (freq * fps));
      if (sleepTime > 0) System.Threading.Thread.Sleep(sleepTime);
      frame = System.Diagnostics.Stopwatch.GetTimestamp();
   }
   // Restart the averaging window every fps frames (about once per second), so a
   // pause or a long stall doesn't trigger a burst of catch-up frames afterwards.
   if (++fpsFrameCount > fps)
   {
      fpsFrameCount = 0;
      fpsStartTime = frame;
   }
}
BlueMonkMN