3

I'm currently making a game in which I would like to limit the frames per second but I'm having problems with that. Here's what I'm doing:

I'm computing deltaTime in this method, which is executed each frame:

void Time::calc_deltaTime() {
    double currentFrame = glfwGetTime();
    deltaTime = currentFrame - lastFrame;
    lastFrame = currentFrame;
}

deltaTime has the value I would expect (around 0.012 to 0.016).

Then I'm using deltaTime to delay the frame through the Windows Sleep function like this:

void Time::limitToMAXFPS() {

    if(1.0 / MAXFPS > deltaTime)
        Sleep((1.0 / MAXFPS - deltaTime) * 1000.0);
}

MAXFPS is equal to 60 and I'm multiplying by 1000.0 to convert seconds to milliseconds. Though everything seems correct, I'm still getting more than 60 fps (around 72 fps).

I also tried this method using a busy-wait while loop:

void Time::limitToMAXFPS() {

    double diff = 1.0 / MAXFPS - deltaTime;

    if(diff > 0) {

        double t = glfwGetTime( );

        while(glfwGetTime( ) - t < diff) { }

    }

}

But I'm still getting more than 60 fps (around 72 fps)... Am I doing something wrong, or is there a better way of doing this?

  • 2
    How do you calculate your current fps? – SingerOfTheFall Jan 19 '17 at 12:50
  • I would, personally, write the `Sleep` expression like this: `Sleep (1000 / MAXFPS - deltaTime * 1000)`, but that shouldn't behave much differently. – Algirdas Preidžius Jan 19 '17 at 12:51
  • @AlgirdasPreidžius: What if `MAXFPS` is an integer type? – IInspectable Jan 19 '17 at 12:53
  • FWIW, here is some info about Windows Sleep() fcn: http://stackoverflow.com/questions/9518106/winapi-sleep-function-call-sleeps-for-longer-than-expected – Erik Alapää Jan 19 '17 at 12:55
  • @IInspectable That's precisely why I wrote it this way, since typically `Sleep` takes an `int` as an argument. I would probably even write `static_cast<int>(deltaTime * 1000)` to force everything to be `int`. But that's only my opinion. – Algirdas Preidžius Jan 19 '17 at 12:55
  • @SingerOfTheFall How so? – Algirdas Preidžius Jan 19 '17 at 12:55
  • @AlgirdasPreidžius, nevermind, I misread your comment horribly =) – SingerOfTheFall Jan 19 '17 at 12:57
  • 3
    I would suggest you use `std::chrono` and `std::this_thread::sleep_for()` to sleep. However, I can't really post a good answer until you show us how you measure your fps, since there might be an error there – SingerOfTheFall Jan 19 '17 at 12:58
  • 1
    why do you want to sleep? opengl has a "wait for vsync" option which could be a better alternative. – Sven Nilsson Jan 19 '17 at 13:04
  • @SvenNilsson, while synchronizing your FPS to monitor's refresh rate is an easy way to limit it, it is usually better to implement your own FPS lock since you can set it to an arbitrary amount of fps – SingerOfTheFall Jan 19 '17 at 13:05
  • @AlgirdasPreidžius: Do you understand how integer arithmetic and floating point arithmetic differ? If `MAXFPS` is an integer, the expression `1000 / MAXFPS` could be radically different from the value you'd expect. – IInspectable Jan 19 '17 at 13:07
  • By the way, what OS are you using, OP? `Sleep()` takes milliseconds as its argument on Windows, but it takes _seconds_ on Linux – SingerOfTheFall Jan 19 '17 at 13:07
  • @SingerOfTheFall: Yeah, but only if you have limited CPU resources and want to avoid loading one core 100% with rendering. – Sven Nilsson Jan 19 '17 at 13:11
  • @SvenNilsson, I have yet to see a CPU with unlimited resources :P Anyway, sometimes you might want to limit your FPS by a value that is lower than your monitor refresh rate (I do not want to discuss whether it's appropriate, I'm just pointing out that it is possible). Anyway, this is not really on-topic. – SingerOfTheFall Jan 19 '17 at 13:15
  • @SingerOfTheFall Yes I'm using Windows. –  Jan 19 '17 at 13:16
  • @IInspectable Yes, I know how they work. I only mentioned what would have been my first attempt. On the second thought, I understand what you mean - depending on `MAX_FPS` setting - that calculation could be inaccurate for up to a second. Guess, that I had too little coffee in the morning :/ – Algirdas Preidžius Jan 19 '17 at 14:25
  • You're going to have to post a more complete sample, so people can recreate the problem. Spin loops (your second solution) are very accurate for this sort of thing, so I guess you're just measuring your FPS wrong – ltjax Jan 19 '17 at 23:13

5 Answers

5

How important is it that you return cycles back to the CPU? To me, it seems like a bad idea to use sleep at all. Someone please correct me if I am wrong, but I think sleep functions should be avoided.

Why not simply use an infinite loop that executes your frame code only when more than a certain time interval has passed? Try:

const double maxFPS = 60.0;
const double maxPeriod = 1.0 / maxFPS; // approx. 16.666 ms

bool running = true;
double lastTime = 0.0;

while( running ) {
    double time = glfwGetTime();
    double deltaTime = time - lastTime;

    if( deltaTime >= maxPeriod ) {
        lastTime = time;
        // code here gets called with max FPS
    }
}

The last time I used GLFW, it seemed to self-limit to 60 fps anyway. If you are doing anything high-performance oriented (a game or 3D graphics), avoid anything that sleeps, unless you want to use multithreading.
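That self-limiting behaviour is usually vsync, which GLFW lets you control explicitly. A minimal sketch of capping the frame rate at the monitor's refresh rate this way (the window size and title are arbitrary, and error handling is kept to a bare minimum):

#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit())
        return -1;

    GLFWwindow* window = glfwCreateWindow(640, 480, "vsync demo", nullptr, nullptr);
    if (!window) { glfwTerminate(); return -1; }

    glfwMakeContextCurrent(window);
    glfwSwapInterval(1); // 1 = wait for vsync on every swap, 0 = uncapped

    while (!glfwWindowShouldClose(window)) {
        // ... render the frame here ...
        glfwSwapBuffers(window); // with swap interval 1, this blocks until the next retrace
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}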

Jacques Nel
  • 2
    Why does it seem like a bad idea? Why do you think sleep should be avoided? Sleeping saves electricity and reduces the heat of the CPU, so to me it only seems like a good idea. – Ted Klein Bergman Feb 18 '19 at 13:06
  • 2
    This solution is wasting CPU/GPU cycles and it isn't a good thing. I can mention CPU/GPU heating, charge/battery life on mobile devices, electricity and environment impact, user responsiveness to other tasks... It's like driving your car only in 1st gear – ManuelJE May 07 '20 at 10:31
3

Sleep can be very inaccurate. A common phenomenon seen is that the actual time slept has a resolution of 14-15 milliseconds, which gives you a frame rate of ~70.

Is Sleep() inaccurate?
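One common workaround on Windows is to ask the OS for a finer timer resolution with timeBeginPeriod, which makes Sleep round to roughly 1 ms instead of the default ~15.6 ms scheduler tick. A minimal sketch, assuming you link against winmm.lib:

#include <windows.h>
#pragma comment(lib, "winmm.lib") // timeBeginPeriod/timeEndPeriod live in winmm

int main() {
    // Request 1 ms timer resolution for the lifetime of the program.
    timeBeginPeriod(1);

    // ... game loop that calls Sleep(...) goes here ...

    // Restore the previous resolution before exiting.
    timeEndPeriod(1);
    return 0;
}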

Sven Nilsson
1

I've given up on trying to limit the fps like this... As you said, Windows is very inconsistent with Sleep. My fps average is always 64 and not 60. The problem is that Sleep takes an integer (or long integer) argument, so I was casting the value with static_cast, but I really need to pass it as a double: sleeping 16 milliseconds each frame is different from sleeping 16.6666... milliseconds. That's probably the cause of the extra 4 fps (so I think).

I also tried :

std::this_thread::sleep_for(std::chrono::milliseconds(static_cast<long>((1.0 / MAXFPS - deltaTime) * 1000.0)));

and the same thing is happening with sleep_for. Then I tried passing the decimal value left over from the milliseconds to chrono::microseconds and chrono::nanoseconds, using all three together to get better precision, but guess what, I still get the freaking 64 fps.
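For what it's worth, chrono can carry the leftover fraction directly in a floating-point duration, so nothing has to be split across milliseconds/microseconds/nanoseconds. A minimal sketch of that variant (accuracy is of course still limited by the OS timer, and the deltaTime value here is just a placeholder):

#include <chrono>
#include <thread>

int main() {
    const double MAXFPS = 60.0;
    double deltaTime = 0.013; // placeholder; would come from Time::calc_deltaTime()

    double diff = 1.0 / MAXFPS - deltaTime; // remaining frame time in seconds
    if (diff > 0.0) {
        // duration<double> is seconds with a double representation,
        // so the fractional milliseconds are not truncated away.
        std::this_thread::sleep_for(std::chrono::duration<double>(diff));
    }
    return 0;
}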

Another weird thing is the expression (1.0 / MAXFPS - deltaTime) * 1000.0: sometimes (yes, this is completely random) when I change 1000.0 to a const integer, making the expression (1.0 / MAXFPS - deltaTime) * 1000, my fps simply jumps to 74 for some reason, even though the two expressions should be equivalent and nothing should change. Both of them are double expressions, so I don't think any type promotion is happening here.

So I decided to force V-sync through the function wglSwapIntervalEXT(1); in order to avoid screen tearing. And then I'm going to use the method of multiplying deltaTime into every value that might vary depending on the speed of the computer executing my game. It's going to be a pain because I might forget to multiply some value and not notice it on my own computer, creating inconsistency, but I see no other way... Thank you all for the help though.
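For reference, the deltaTime-scaling idea mentioned above usually boils down to something like this sketch (the speed value and the single position variable are made up for illustration):

#include <iostream>

int main() {
    double position = 0.0;
    const double speed = 5.0;           // placeholder: units per second

    double deltaTime = 1.0 / 144.0;     // would come from Time::calc_deltaTime()
    position += speed * deltaTime;      // same distance per second at 60 or 144 fps

    std::cout << position << '\n';
    return 0;
}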

1

I've recently started using GLFW for a small side project I'm working on, and I've used std::chrono alongside std::this_thread::sleep_until to achieve 60 fps:

int frames = 0; // frame counter for the once-per-second FPS printout
auto start = std::chrono::steady_clock::now();
while(!glfwWindowShouldClose(window))
{
    ++frames;
    auto now = std::chrono::steady_clock::now();
    auto diff = now - start;
    auto end = now + std::chrono::milliseconds(16); // target wake-up time for this frame
    if(diff >= std::chrono::seconds(1))
    {
        start = now;
        std::cout << "FPS: " << frames << std::endl;
        frames = 0;
    }
    glfwPollEvents();

    processTransition(countit);
    render.TickTok();
    render.RenderBackground();
    render.RenderCovers(countit);

    std::this_thread::sleep_until(end);
    glfwSwapBuffers(window);
}

To add, you can easily adjust the FPS preference by adjusting end. Now with that said, I know GLFW was limited to 60 fps, but I had to disable that limit with glfwSwapInterval(0); just before the while loop.
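A variation on this is to schedule each wake-up from the previous target time instead of from now, so small oversleeps do not accumulate into drift. A minimal sketch, with window setup and rendering omitted and a placeholder exit condition so it terminates:

#include <chrono>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const auto framePeriod = std::chrono::microseconds(16667); // ~60 fps

    auto next = clock::now() + framePeriod;
    bool running = true;
    while (running) {
        // ... poll events, update, render, swap buffers ...

        std::this_thread::sleep_until(next);
        next += framePeriod; // advance from the previous target, not from now()
        running = false;     // placeholder so the sketch terminates
    }
    return 0;
}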

Alawi
-1

Are you sure your Sleep function accepts floating-point values? If it only accepts an int, your sleep will end up as a Sleep(0), which would explain your issue.

doron
  • 3
    If I am computing correctly without pen and paper, he will call Sleep with an argument around 20-40, not 0. – Erik Alapää Jan 19 '17 at 13:03
  • @doron Yes, that is a problem, I was giving Sleep a double while it expects a DWORD, which is an unsigned int. I'm casting it to DWORD like this: `(DWORD)((1.0 / MAXFPS - deltaTime) * 1000.0)`, and now I'm getting around 64 fps. I think Sven Nilsson is right, I'm going to have to look for an alternative to Sleep. –  Jan 19 '17 at 13:35
  • @MagnoSilva: The usual approach would be polling the high-resolution timers and busy-waiting until the desired amount of time has passed. In general, "sleeping" a process (i.e. yielding its CPU cycles to the rest of the system) will result in a rather coarse time resolution. On modern desktop systems the scheduler timer runs at between 250 Hz and 1000 Hz, giving an effective timing accuracy of somewhere between 1 ms and 5 ms. Add to this that your process will not be rescheduled immediately, so you can expect a scheduling granularity of about 10 ms (a sketch of a sleep-then-spin compromise follows below). – datenwolf Jan 19 '17 at 15:58
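A common compromise along those lines is to sleep for most of the remaining frame time and busy-wait only for the last couple of milliseconds. A minimal sketch (the 2 ms spin margin is an assumption, not a measured value):

#include <chrono>
#include <thread>

// Sleep coarsely, then spin for the remainder, to hit 'target' more precisely
// than Sleep()/sleep_for() alone would.
void waitUntil(std::chrono::steady_clock::time_point target) {
    const auto spinMargin = std::chrono::milliseconds(2); // assumed safety margin
    auto now = std::chrono::steady_clock::now();
    if (target - now > spinMargin)
        std::this_thread::sleep_for(target - now - spinMargin); // coarse part
    while (std::chrono::steady_clock::now() < target) { }       // precise part
}

int main() {
    const auto framePeriod = std::chrono::microseconds(16667); // ~60 fps
    auto frameStart = std::chrono::steady_clock::now();

    // ... per-frame work would go here ...

    waitUntil(frameStart + framePeriod);
    return 0;
}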