I am trying to find a solid method for setting exactly how many FPS I want my OpenGL application to render on screen. I can do it to some extent by sleeping for 1000/fps milliseconds, but that doesn't take into account the time needed to render. What is the most consistent way to limit FPS to a desired amount?
-
What about: http://stackoverflow.com/questions/3294972/setting-max-frames-per-second-in-opengl/3295131#3295131 , http://www.opengl.org/discussion_boards/showthread.php/130329-limit-frames-per-second and http://www.nexcius.net/2012/11/11/printing-and-limiting-fps-using-glut/ ? – user2284570 Oct 24 '13 at 17:41
-
Sleeping for 1 ms also does not take into account operating system scheduler granularity. On most non-realtime operating systems you cannot reliably put a thread / process to sleep for 1 ms, the best you can do is probably 10-15 ms. So then you wind up using a spinlock, which just sits there wasting CPU cycles. You might as well double-buffer your physics simulation, etc. and run that at a different frequency from rendering in order to keep the CPU doing something useful while simultaneously meeting scheduling deadlines in the rendering portion of your application. – Andon M. Coleman Oct 27 '13 at 00:51
7 Answers
You can sync to vblank by using wglSwapIntervalEXT in OpenGL. It's not nice code, but it does work.
http://www.gamedev.net/topic/360862-wglswapintervalext/#entry3371062
bool WGLExtensionSupported(const char *extension_name)
{
    // Look for the extension name in the WGL extensions string.
    PFNWGLGETEXTENSIONSSTRINGEXTPROC _wglGetExtensionsStringEXT =
        (PFNWGLGETEXTENSIONSSTRINGEXTPROC)wglGetProcAddress("wglGetExtensionsStringEXT");

    if (strstr(_wglGetExtensionsStringEXT(), extension_name) == NULL) {
        return false;
    }

    return true;
}
and
PFNWGLSWAPINTERVALEXTPROC    wglSwapIntervalEXT = NULL;
PFNWGLGETSWAPINTERVALEXTPROC wglGetSwapIntervalEXT = NULL;

if (WGLExtensionSupported("WGL_EXT_swap_control"))
{
    // Extension is supported; init the function pointers.
    wglSwapIntervalEXT = (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

    // This is another function from the WGL_EXT_swap_control extension.
    wglGetSwapIntervalEXT = (PFNWGLGETSWAPINTERVALEXTPROC)wglGetProcAddress("wglGetSwapIntervalEXT");
}
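With those pointers loaded, enabling vsync is a single call. A minimal sketch, assuming an OpenGL context is already current on this thread:

// Ask the driver to wait for one vertical blank per buffer swap (classic vsync).
if (wglSwapIntervalEXT != NULL) {
    wglSwapIntervalEXT(1);   // 0 = uncapped, 1 = sync every swap to the vblank
}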

-
This is the right way to sync to the display's refresh rate, which is normally what you want. Unfortunately, WGL (and hence wglSwapIntervalEXT) is Windows-specific. There is an equivalent GLX extension though: http://www.opengl.org/registry/specs/EXT/swap_control.txt. And if you're using EGL, there's eglSwapInterval. – dave Feb 20 '13 at 01:53
-
How is this code not nice? This is literally _the_ way to do it – RecursiveExceptionException Oct 08 '16 at 22:10
Since OpenGL is just a low-level graphics API, you won't find anything like this built into OpenGL directly.
However, I think your logic is a bit flawed. Rather than the following:
- Draw frame
- Wait 1000/fps milliseconds
- Repeat
You should do this:
- Start timer
- Draw frame
- Stop timer
- Wait (1000/fps - (stop - start)) milliseconds
- Repeat
This way you are only waiting exactly the amount you should be, and you should end up very close to 60 (or whatever you're aiming for) frames per second.
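For example, a rough sketch of that loop on Windows. The timer is just one possible choice (timeGetTime() from winmm); running and DrawFrame() are placeholders for your own loop condition and rendering/swap code:

// Hypothetical frame loop: time the frame, then sleep off only the remainder.
const DWORD frame_ms = 1000 / 60;           // target frame time, ~16 ms for 60 fps

while (running)
{
    DWORD start = timeGetTime();

    DrawFrame();                            // rendering + buffer swap

    DWORD elapsed = timeGetTime() - start;  // how long the frame actually took
    if (elapsed < frame_ms)
        Sleep(frame_ms - elapsed);          // wait only for the time left over
}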

-
I inserted start = GetTickCount() in the beginning of my render function and stop = GetTickCount() in the end. For about 99% of the time, those 2 have the same value so their difference is 0. Does this mean that my scene is rendered in less than a millisecond? Inserting this into my sleep calculation, fps are not closer to the set amount than before. Sleep(1000/fps) & Sleep(1000/fps -(stop-start)) give the same deviation from my desired FPS value. – Tsaras Feb 20 '13 at 12:22
-
From http://msdn.microsoft.com/en-us/library/windows/desktop/ms724408(v=vs.85).aspx, "The resolution of the GetTickCount function is limited to the resolution of the system timer, which is typically in the range of 10 milliseconds to 16 milliseconds." – Andrew Rasmussen Feb 20 '13 at 18:37
-
Check out http://msdn.microsoft.com/en-us/magazine/cc163996.aspx if you really want to build it yourself, otherwise use something like glut or wgl where this will be built in for you. – Andrew Rasmussen Feb 20 '13 at 18:38
-
Because OpenGL is asynchronous, sub-frame timing can be very tricky. The best way is to time the whole frame and take into account the amount you slept for. For example, store `lastTime`, find `currentTime`, `deltaTime = currentTime - lastTime`, `drawingTime = deltaTime - sleptTime`, `sleep(1.0/60.0 - drawingTime)`, `sleptTime = 1.0/60.0 - drawingTime`, `lastTime = currentTime`. – jozxyqk Oct 25 '13 at 19:51
-
Measure time after glFlush() and the buffer swap, and use a more accurate time measurement (QueryPerformanceCounter, RDTSC, or some hi-resolution timer). – Spektre Oct 29 '13 at 07:06
Don't use sleeps. If you do, then the rest of your application must wait for them to finish.
Instead, keep track of how much time has passed and render only when 1000/fps has been met. If the timer hasn't been met, skip it and do other things.
In a single-threaded environment it will be difficult to make sure you draw at exactly 1000/fps unless that is absolutely the only thing you're doing. A more general and robust way would be to have all your rendering done in a separate thread and launch/run that thread on a timer. This is a much more complex problem, but will get you the closest to what you're asking for.
Also, keeping track of how long it takes to issue the rendering would help in adjusting on the fly when to render things.
static unsigned int render_time = 0;   // how long the last render took

now = timegettime();
elapsed_time = now - last_render_time + render_time;  // include the expected render cost

if (elapsed_time > 1000 / fps)
{
    start_render = timegettime();
    // issue rendering commands...
    end_render = timegettime();

    render_time = end_render - start_render;
    last_render_time = now;
}

-
If the rest of your application is busy, then you shouldn't be sleeping. If there's nothing more to do, then you should be sleeping. Mentioning threads is a good idea, though can't the rendering thread remain active and use sleep() as the timer? – jozxyqk Oct 30 '13 at 09:14
-
If you are rendering in a separate thread, then using sleep() would be a better option versus running the thread on a timer. – tamato Oct 30 '13 at 19:56
OpenGL itself doesn't have any functionality that allows limiting framerate. Period.
However, modern GPUs and drivers offer a lot of functionality covering framerate, frame prediction, and so on. John Carmack pushed to get some of that functionality exposed, and there's NVIDIA's adaptive sync.
What does all that mean for you? Leave it up to the GPU. Assume that drawing is totally unpredictable (as you should when sticking to OpenGL only), time the events yourself, and keep the logic updates (such as physics) separate from drawing. That way users will be able to benefit from all those advanced technologies and you won't have to worry about it anymore.
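For illustration, a minimal fixed-timestep sketch of that separation. SecondsNow(), UpdateLogic(), Render() and PresentFrame() are placeholder names, not part of this answer:

// Fixed-timestep sketch: logic advances in constant dt steps, rendering runs freely.
const double dt = 1.0 / 120.0;          // logic rate, independent of the display
double accumulator = 0.0;
double previous = SecondsNow();         // SecondsNow(): any monotonic clock in seconds

while (running)
{
    double current = SecondsNow();
    accumulator += current - previous;
    previous = current;

    while (accumulator >= dt)           // catch the logic up in fixed steps
    {
        UpdateLogic(dt);                // physics, game state, etc.
        accumulator -= dt;
    }

    Render();                           // draw whenever the loop gets here
    PresentFrame();                     // swap buffers; vsync (if enabled) paces this
}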

-
But then when do you draw? If I leave it up to OpenGL I get 1000+ fps and since my screen only refreshes at 60 fps that is a waste of resources.. – markmnl Sep 27 '15 at 15:07
-
How in absolute hell did this answer get a bounty? Although the first part is correct: OpenGL _itself_ doesn't have functionality to limit framerate, the second part is completely and utterly wrong. It is very much desirable to enable vsync (note that the user should have the choice to turn it off though). This is done through the OpenGL context api (WGL, GLX, EGL, etc.), via a call to (for example) `wglSwapIntervalEXT`. _Every proper game ever made uses this._ Vendors even added [negative swap intervals](https://www.opengl.org/registry/specs/EXT/wgl_swap_control_tear.txt) later – RecursiveExceptionException Oct 08 '16 at 22:21
-
@itzJanuary Your link is a 404. And I think you misunderstood what I meant by "leave that up to the GPU". The original question asked about timing the render so that the *logic* is processed at an appropriate pace. That is a fundamentally flawed approach and that's what I tried to recommend avoiding. An app synced to the screen could behave reasonably well, *until* (unless) the render times would cause it to miss frames. The point is still to keep the logic separated from the framerate, regardless of whether any sync is used at all. And I'm not recommending against V-Sync. – Bartek Banachewicz Mar 01 '17 at 11:24
-
Apparently khronos decided to take down the spec, but it's also briefly mentioned [here](https://www.khronos.org/opengl/wiki/Swap_Interval#Adaptive_Vsync). – RecursiveExceptionException Mar 01 '17 at 17:52
-
Making the logic separate from draw may seem like a reasonable idea but in many cases there's no way to implement this. Separate threads cause all sorts of synchronization issues (one entity updated, one not). If I'm not wrong, all major game engines have both update and draw in the same loop and the same thread. The logic is called with delta time and may even be called twice a frame should the delta time get too large to be reasonable. I can't seem to find the page where the different main loop types of the Unreal engine were explained, but they were all based on variations of this concept – RecursiveExceptionException Mar 01 '17 at 17:54
-
@itzJanuary I didn't say anything about threads. I just meant sync and timing. – Bartek Banachewicz Mar 02 '17 at 13:55
-
@BartekBanachewicz Well in a single threaded scenario there's no way to separate draw from logic, so logic is necessarily bound to framerate. – RecursiveExceptionException Mar 04 '17 at 17:05
An easy way is to use GLUT. This code may do the job, roughly.
static int redisplay_interval;

void timer(int)
{
    glutPostRedisplay();
    glutTimerFunc(redisplay_interval, timer, 0);
}

void setFPS(int fps)
{
    redisplay_interval = 1000 / fps;
    glutTimerFunc(redisplay_interval, timer, 0);
}
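For context, a hypothetical way to wire this up; the display callback name and the window setup are assumptions, not part of the answer:

// Hypothetical GLUT setup: register the draw callback, then start the redisplay timer.
int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("fps-limited");
    glutDisplayFunc(display);   // your draw function, ending with glutSwapBuffers()
    setFPS(60);                 // request a redisplay roughly every 16 ms
    glutMainLoop();
    return 0;
}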

Put this after drawing and the call to swap buffers:
// calculate time taken to render last frame (and assume the next will be similar)
thisTime = getElapsedTimeOfChoice(); // the higher resolution this is the better
deltaTime = thisTime - lastTime;
lastTime = thisTime;

// limit framerate by sleeping. a sleep call is never really that accurate
if (minFrameTime > 0)
{
    sleepTime += minFrameTime - deltaTime; // add difference to desired deltaTime
    sleepTime = max(sleepTime, 0);         // negative sleeping won't make it go faster :(
    sleepFunctionOfChoice(sleepTime);
}
If you want 60 fps, minFrameTime = 1.0/60.0 (assuming time is in seconds).
This won't give you vsync, but it will mean that your app isn't running out of control, which can affect physics calculations (if they're not fixed-step), animation, etc. Just remember to process input after the sleep! I've experimented with trying to average frame times, but this has worked best so far.
For getElapsedTimeOfChoice(), I use what's mentioned here, which is:
- LINUX: clock_gettime(CLOCK_MONOTONIC, &ts)
- WINDOWS: QueryPerformanceCounter
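As a rough sketch (not from the original answer), a getElapsedTimeOfChoice() built on those two APIs might look like this:

// Sketch of getElapsedTimeOfChoice(): seconds from a monotonic clock.
#ifdef _WIN32
#include <windows.h>
double getElapsedTimeOfChoice()
{
    static LARGE_INTEGER freq = {0};
    LARGE_INTEGER now;
    if (freq.QuadPart == 0)
        QueryPerformanceFrequency(&freq);    // ticks per second, constant at runtime
    QueryPerformanceCounter(&now);
    return (double)now.QuadPart / (double)freq.QuadPart;
}
#else
#include <time.h>
double getElapsedTimeOfChoice()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);     // unaffected by wall-clock changes
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}
#endif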
Another idea is to use WaitableTimers (when possible, for instance on Windows).
Basic idea:
while (true)
{
    SetWaitableTimer(myTimer, desired_frame_duration, ...);

    PeekMsg(...)
    if (quit....) break;

    if (msg)
        handle message;
    else
    {
        Render();
        SwapBuffers();
    }

    WaitForSingleObject(myTimer);
}
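For reference, a minimal sketch of the Windows calls involved, assuming a 60 fps target; message handling and rendering are omitted:

// Sketch: pace frames with a waitable timer (Windows, ~16 ms per frame assumed).
HANDLE myTimer = CreateWaitableTimer(NULL, FALSE, NULL);   // auto-reset timer

LARGE_INTEGER dueTime;
dueTime.QuadPart = -16LL * 10000LL;   // negative = relative time, in 100-ns units (16 ms)

// At the start of each frame:
SetWaitableTimer(myTimer, &dueTime, 0, NULL, NULL, FALSE);

// ... PeekMessage / Render / SwapBuffers here ...

// Block until the 16 ms armed above have elapsed.
WaitForSingleObject(myTimer, INFINITE);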
More info: How to limit fps information
