12

Is there any way to calculate how many updates should be made to reach a desired frame rate, NOT system specific? I found one for Windows, but I would like to know if something like this exists in OpenGL itself. It should be some sort of timer.

Or how else can I prevent the FPS from dropping or rising dramatically? For now I'm testing it by drawing a big number of vertices in a line, and using Fraps I can see the frame rate go from 400 to 200 FPS, with an evident slowdown of the drawing.

deft_code
Raven

7 Answers

12

You have two different ways to solve this problem:

  1. Suppose that you have a variable called maximum_fps, which contains the maximum number of frames per second you want to display.

    Then you measure the amount of time spent on the last frame (a timer will do).

    Now suppose that you said you wanted a maximum of 60 FPS in your application. Then you want the measured time to be no lower than 1/60 of a second. If the measured time is lower, you call sleep() for the amount of time left in the frame.

  2. Or you can have a variable called tick, which contains the current "game time" of the application. With the same timer, you increment it at each iteration of your application's main loop. Then, in your drawing routines, you calculate positions based on the tick variable, since it contains the current time of the application (sketched below).

    The big advantage of option 2 is that your application will be much easier to debug, since you can play around with the tick variable and go forward and back in time whenever you want. This is a big plus.
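For illustration, here is a minimal sketch of option 2, assuming C++11's <chrono>; tick, draw_scene() and main_loop() are made-up names rather than part of any particular API:

#include <chrono>

double tick = 0.0;            // current "game time", in seconds

void draw_scene(double t);    // placeholder: positions are computed from t, not from the FPS

void main_loop()
{
    using clock = std::chrono::steady_clock;
    auto previous = clock::now();

    for (;;)
    {
        auto now = clock::now();
        tick += std::chrono::duration<double>(now - previous).count(); // advance game time by the real elapsed time
        previous = now;

        draw_scene(tick);     // to debug, you can freeze or rewind tick here
    }
}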

Gustavo Muenz
  • 9
    The first one is only for slowing FPS down and it's dangerous to rely on the accuracy of the sleep interval. – young Jul 21 '10 at 00:19
  • In some simple cases (including my current one) the 1st solution is good enough, because I just need to keep the FPS uniform while the app is running; knowing that my frame rate never drops under 200, it will be efficient. So maybe dangerous, but simple to get into your code without many changes, it is as good as the second. – Raven Jul 21 '10 at 00:38
  • "Then You measure the amount of time spent on the last frame " how do you know that last frame is finished and its time to measure the delay? – Allahjane Dec 25 '14 at 12:59
  • @Allahjane like the usual: timer = time(); render_frame(); time_spent = time() - timer; – Gustavo Muenz Jan 07 '15 at 22:49
  • Actually I meant the OpenGL call that tells you the frame draw is complete – Allahjane Jan 08 '15 at 14:33
10

Rule #1. Do not make update() or loop() kind of functions rely on how often they get called.

You can't really get your desired FPS. You could try to boost it by skipping some expensive operations, or slow it down by calling sleep()-like functions. However, even with those techniques, the FPS will almost always differ from the exact FPS you want.

The common way to deal with this problem is to use the elapsed time since the previous update. For example,

// Bad
void enemy::update()
{
  position.x += 10; // this enemy's movement speed depends entirely on the FPS, and you can't control it.
}

// Good
void enemy::update(float elapsedTime)
{
  position.x += speedX * elapsedTime; // now you control speedX, and it doesn't matter how often update() gets called.
}
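To complete the picture, here is one way the caller could compute elapsedTime; this is only a sketch assuming C++11's <chrono>, and update_world() is a stand-in for calling update(elapsedTime) on your enemies and other objects:

#include <chrono>

void update_world(float elapsedTime);   // stand-in: calls enemy::update(elapsedTime), etc.

void game_loop()
{
    using clock = std::chrono::steady_clock;
    auto last = clock::now();

    for (;;)
    {
        auto now = clock::now();
        float elapsedTime = std::chrono::duration<float>(now - last).count(); // seconds since the last update
        last = now;

        update_world(elapsedTime);      // movement now scales with real time, not with the FPS
    }
}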
young
  • basically you are saying the same thing as Edison, but this code demonstration makes it clear for everyone now, I think. Thanks – Raven Jul 21 '10 at 00:34
  • You're welcome~ :) and yep, mine is the same as the 2nd one in Edison's answer. I just wanted to point out the weakness of the 1st one. – young Jul 21 '10 at 00:51
5

Is there any way to calculate how many updates should be made to reach a desired frame rate, NOT system specific?

No.

There is no way to precisely calculate how many updates should be made to reach the desired framerate.

However, you can measure how much time has passed since the last frame, calculate the current framerate from it, compare it with the desired framerate, and then introduce a bit of sleeping to bring the current framerate down to the desired value. Not a precise solution, but it will work.

I found one for Windows, but I would like to know if something like this exists in OpenGL itself. It should be some sort of timer.

OpenGL is concerned only with rendering and has nothing to do with timers. Also, using Windows timers isn't a good idea. Use QueryPerformanceCounter, GetTickCount or SDL_GetTicks to measure how much time has passed, and sleep to reach the desired framerate.
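As a rough sketch of that "measure, then sleep" idea, assuming SDL is available (SDL_GetTicks and SDL_Delay both work in milliseconds, and render_frame() is a placeholder for your own drawing and buffer swap):

#include <SDL.h>

void render_frame();   // placeholder: your OpenGL drawing + buffer swap

void run_capped(Uint32 desired_fps)
{
    const Uint32 frame_ms = 1000 / desired_fps;   // target duration of one frame

    for (;;)
    {
        Uint32 start = SDL_GetTicks();
        render_frame();

        Uint32 elapsed = SDL_GetTicks() - start;
        if (elapsed < frame_ms)
            SDL_Delay(frame_ms - elapsed);        // sleep off the rest of the frame
    }
}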

Or how else can I prevent the FPS from dropping or rising dramatically?

You prevent the FPS from rising by sleeping.

As for preventing FPS from dropping...

It is an insanely broad topic. Let's see. It goes something like this:

  - use vertex buffer objects or display lists;
  - profile the application;
  - do not use insanely big textures;
  - do not use too much alpha blending;
  - avoid "raw" OpenGL (glVertex3f);
  - do not render invisible objects (even if no polygons are being drawn, processing them takes time);
  - consider learning about BSPs or octrees for rendering complex scenes;
  - in parametric surfaces and curves, do not needlessly use too many primitives (if you render a circle using one million polygons, nobody will notice the difference);
  - disable vsync.

In short: reduce to the absolute possible minimum the number of rendering calls, rendered polygons, rendered pixels and texels read, read every piece of available performance documentation from NVidia, and you should get a performance boost.

SigTerm
  • I have found that even OGL has its own timer, like this: glutTimerFunc(40, Timer, 0); Anyway, when I was talking about the Windows one, I thought that QueryPerformanceCounter and QueryPerformanceFrequency are available only on Windows. – Raven Jul 20 '10 at 23:23
  • 3
    @Raven: "glutTimerFunc" This is not an OpenGL function - glut is not a part of OpenGL. "are avalible only on windows" Yes, they're windows only. If you want cross-platform solution, use SDL_GetTicks. – SigTerm Jul 20 '10 at 23:32
  • But if you are using glut (freeglut in my case), you still have a cross-platform solution, is that right? Of course you need the freeglut library then, but glut provides valuable functions (such as that timer) in that piece of file... – Raven Jul 20 '10 at 23:57
1

This code may do the job, roughly.

static int redisplay_interval;   // milliseconds between redisplays

void timer(int) {
    glutPostRedisplay();                             // ask GLUT to redraw
    glutTimerFunc(redisplay_interval, timer, 0);     // re-arm the timer for the next frame
}

void setFPS(int fps)
{
    redisplay_interval = 1000 / fps;                 // e.g. 60 FPS -> ~16 ms
    glutTimerFunc(redisplay_interval, timer, 0);
}
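For context, this is roughly how setFPS() might be wired into a GLUT program; display() here is a placeholder for your own drawing callback, not part of the answer:

#include <GL/glut.h>

void display();                      // placeholder: your glutDisplayFunc callback

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("fps demo");
    glutDisplayFunc(display);
    setFPS(60);                      // schedule a redisplay roughly every 1000/60 ms
    glutMainLoop();
    return 0;
}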
leoly
1

You're asking the wrong question. Your monitor will only ever display at 60 fps (50 fps in Europe, or possibly 75 fps if you're a pro-gamer).

Instead you should be seeking to lock your fps at 60 or 30. There are OpenGL extensions that allow you to do that. However, the extensions are not cross-platform (luckily they are not video-card specific, or it'd get really scary).

These extensions are closely tied to your monitor's v-sync. Once enabled, calls to swap the OpenGL back buffer will block until the monitor is ready for them. This is like putting a sleep in your code to enforce 60 fps (or 30, or 15, or some other number if you're not using a monitor which displays at 60 Hz). The difference is that the "sleep" is always perfectly timed, instead of an educated guess based on how long the last frame took.
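As a hedged illustration, enabling v-sync on Windows through the WGL_EXT_swap_control extension (wglSwapIntervalEXT, the call brought up in the comments below) can look roughly like this; other platforms expose their own equivalents:

#include <windows.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

bool enable_vsync()
{
    // Requires a current OpenGL rendering context.
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (!wglSwapIntervalEXT)
        return false;                          // extension not available
    return wglSwapIntervalEXT(1) != FALSE;     // 1 = wait for one v-blank per swap
}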

deft_code
  • 1
    "You're asking the wrong question." It is a quite reasonable game-development question. "Your monitor will only ever display" Yep, but an application can easily produce 600 frames per second. Also, very high framerate is useful for capturing video and slowing it down afterwards. – SigTerm Jul 20 '10 at 23:36
  • 2
    "Instead you should be seeking to lock your fps at 60 or 30.". You definitely shouldn't do that - if you do that game will not function if hardware is not powerful enough. Properly done game should be able to run on any framerate (from 5 to 1000). "wglSwapIntervalEXT" It has very little to do with framerate. – SigTerm Jul 20 '10 at 23:42
  • I was a bit vague. Of course you should gracefully degrade if the system cannot push 30 fps. However, for a better user experience it's better to "lock" your frame rate at something consistent rather than jump everywhere from 20-200. – deft_code Jul 21 '10 at 01:33
  • Also I think we are talking past each other a bit. I'm approaching this from the video game standpoint. With video games it is a common beginner mistake to push the fps as high as possible because bigger is obviously better. With video games it is always better to have a consistent fps, the one exception being when you're testing graphics performance. I'd never considered that there would be a valid reason to push a huge fps other than to show off a game engine. – deft_code Jul 21 '10 at 01:40
  • @Caspin: "I'm approaching this from the video game standpoint." I'm also talking from video game standpoint. Locking fps is unreliable (you'll never get exact value) and should be avoided unless there is some kind of limitation (say, in physics engine), while supporting variable framerate isn't difficult. From my opinion, the proper way is to make framerate variable - measure how much time has passed, update scene accordingly. – SigTerm Jul 21 '10 at 08:34
  • 1
    @Caspin: "valid reason to push a huge fps" I already saw enough heated discussions about this subject, there are two main arguments - with higher fps you'll get smoother control. Even if fps is above monitor refresh rate. Another argument is for making video game videos (say, with fraps) - when framerate is above 200, you can easily make a good slow-motion video from it. – SigTerm Jul 21 '10 at 08:36
  • @Caspin: Speaking of slow motion, it will be more difficult to do slow-motion movement in a game that supports only a fixed framerate; a variable-framerate engine wouldn't care - to change the time flow speed you'll need to multiply deltaT, and that's all you need. With a fixed framerate you'll have to change the number of updates, and as a result you'll get jerky movement or extra CPU load (if you want everything to move faster). Also, if you want, it is very easy to convert a variable-framerate engine to a fixed framerate - you only need to modify the class that calculates deltaT and add a bit of sleeping. – SigTerm Jul 21 '10 at 08:41
  • @Caspin: "our monitor will only ever display at 60 fps" assuming that monitor will ever support only that framerate is incorrect. Not long ago, there were a lot of CRT monitors that could support 120hz refresh rate. If a new device appears on the market, and your game will be locked at refresh rate below monitors refresh rate, customer won't be happy. Engine should be able to support as much frames per second as it can, but a user should have an option to enable vsync. I believe this is the end of discussion. – SigTerm Jul 21 '10 at 08:44
  • Alright, let me clarify: I only disagree with you on the fps issue. Everything else is best practice for game development. We agree: game updates (engine/physics/etc.) should be independent of frame rate. To run a game in slow motion I would downscale the time passed to the updates. We agree: sampling input faster than 60Hz is a good thing. I personally think that feeding your last frame time as the current deltaT is a bad idea (I don't think you were advocating this, just clarifying). Instead the deltaT should be smoothed in some way in case there was an fps spike. – deft_code Jul 21 '10 at 12:59
  • We agree: a frame rate cannot really be locked. We *disagree*: frame rate should prefer consistency over raw speed. I think we'd agree that the perfect situation (which will never occur) is for the engine to get done with the latest screen update just as the monitor is ready for a new one. When using double-buffering, coordinating render updates with v-sync is the preferred way. When using triple-buffering both techniques work well. By default DirectX synchronizes with v-sync even when triple-buffering. – deft_code Jul 21 '10 at 14:00
  • Raw speed is required for quality slow animation in a network game, as network updates cannot be slowed. Timing updates with v-sync has the advantage of wasting less CPU time (raw speed renders screens that will never be displayed on the monitor). I will argue that consistent monitor updates are important, e.g. a consistent 30 fps is superior to an fps ranging from 55-65 on a 60Hz monitor. That however does not apply to the question at hand (my answer is wrong). When frame rates are consistently larger than the monitor's refresh rate, the monitor will still update at a consistent rate. – deft_code Jul 21 '10 at 14:12
  • 1
    -1 because European monitors are identical to monitors in the rest of the world. The 50/60 framerate is something that was true in the bad old days of analog monitors; it's not true for digital displays. – Clearer Apr 13 '15 at 16:00
1

You absolutely do want to throttle your frame rate; it all depends on what you have going on in that rendering loop and what your application does, especially if it's physics/network related, or if you're doing any type of graphics processing with an outside toolkit (Cairo, QPainter, Skia, AGG, ...) - unless you want out-of-sync results or 100% CPU usage.

zester
0

Here is a similar question, with my answer and a worked example.

I also like deft_code's answer, and will be looking into adding what he suggests to my solution.

The crucial part of my answer is:

If you're thinking about slowing down AND speeding up frames, you have to think carefully about whether you mean rendering or animation frames in each case. In this example, render throttling for simple animations is combined with animation acceleration, for any cases when frames might be dropped in a potentially slow animation.

The example is for animation code that renders at the same speed regardless of whether benchmarking mode, or fixed FPS mode, is active. An animation triggered before the change even keeps a constant speed after the change.

Thomas Poole