Hey guys, this is my first question on Stack Overflow, so if I've done something wrong, my bad.

I have a program designed to make precise mouse movements at specific times. It calculates the timing using a few hard-coded variables and a timing function that works in microseconds for accuracy. The program works as intended and makes the correct movements at the correct times.

The only problem is that the sleeping function I am using is a hot loop (as in, it's a while loop without a sleep), so while the program is executing the movements it can take up to 20% CPU usage. The context is a game, and this can drop the in-game FPS from 60 down to 30 with lots of stuttering, making the game unplayable. I am still learning C++, so any help is greatly appreciated. Below are some snippets of my code to show what I am trying to explain.

This is where the sleep is called, for some context:

        void foo(/* parameters and stuff, not really important */)
        {
            // Code is doing a bunch of movement stuff here, not important.

            // After doing its first movement of many, it calls this sleep function (Time::Sleep)
            // from an if statement so it knows how long to sleep before executing the next movement.
            if (repeat_delay - animation > 0) Time::Sleep(repeat_delay - animation, excess);
        }

Now here is the actual sleeping function which, according to the Visual Studio performance profiler, is using all my resources. All of the parameters in this function are accounted for already; like I said before, the code works perfectly, apart from performance.

#include "Time.hpp"
#include <windows.h>

namespace Time
{
    void Sleep(int64_t sleep_ms, std::chrono::time_point<std::chrono::steady_clock> start)
    {
        sleep_ms *= 1000; // convert the requested delay from milliseconds to microseconds
        // Whole milliseconds still remaining; this coarse part is handed to the OS sleep below.
        auto truncated = (sleep_ms - std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::high_resolution_clock::now() - start).count()) / 1000;
        // Hot loop: after the coarse sleep, this spins until the deadline, which is what burns the CPU.
        while (std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::high_resolution_clock::now() - start).count() < sleep_ms)
        {
            if (truncated)
            {
                std::this_thread::sleep_for(std::chrono::milliseconds(truncated));
                truncated = 0;
            }
            /*
            I have attempted putting even a 1 microsecond sleep in here, which brings CPU usage down to
            0.5%, which is great, but my movements slowed right down, and even after attempting to speed
            up the movements manually by altering the movement function's mouse speed variable, it just
            makes the movements inaccurate. How can I improve performance here without sacrificing
            accuracy?
            */
        }
    }
}
  • I have no idea about the framework you're using, but _polling_ is going to destroy performance. I'd go for something event-driven. – Ted Lyngmo Oct 23 '20 at 03:13
  • You mention accuracy. How much timing accuracy do you need? ±10 ms? A normal sleep_for will sleep for APPROXIMATELY the requested time (with a bit of error). Usually this is acceptable, and it doesn't chew any CPU, I believe. However, as @Ted Lyngmo said, an event-driven architecture here is the way to go. – g-radam Oct 23 '20 at 03:24
  • Sleeps can get a bit weird. For example, if you sleep for 10 ms on a Windows PC with the standard 64 ticks per second, that 10 ms becomes ~16 ms really fast, because you wait at LEAST that long, and if the system tick comes at the wrong time, say at the 9 ms mark, you may have to wait almost a whole extra tick. – user4581301 Oct 23 '20 at 03:40
  • @g-radam Sorry for the confusion, by accuracy I mean the accuracy of my movements and where my mouse ends up in relation to the previous position. For example, without adding extra sleeps, and letting the program take 20% CPU, my mouse will end up going from position 0,0 to 0.75, 1.25 (as an example). Adding the sleeps makes my mouse go slower, and instead go from 0,0 to 0.68, 1.10. What does event-driven architecture mean as well? – TheManTheGuy69 Oct 23 '20 at 03:46
  • Ah, this is a classic issue regarding floating point errors + loops. Each iteration of your loop, you are adding a change (delta) movement to your current mouse position. Since the 'delta' / floating point number is NOT 100% accurate, your mouse position gets moved but with a bit of error. Loop this a thousand times, and you get your results. To fix this, you need to get your desired start / end mouse positions, then every loop / sleep, **SET** your mouse position to a percentage of the total movement (a short sketch of this follows after these comments). – g-radam Oct 23 '20 at 03:53
  • I should note, yes, your CPU usage is an issue due to how you slept, but so is what you described above re the mouse position error. If your position is becoming inaccurate with the number of loops you do (or smaller and smaller sleep times), you have problems, and really need to fix this as well. PS: Event-driven architecture is where you have a bunch of functions which "do" stuff, e.g. one function to click the mouse at x,y, one function to move the mouse to x,y, one function to sleep, etc. Then you create an array of "events" aka "tasks" which are executed in sequence / at specific times. – g-radam Oct 23 '20 at 03:57
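
For what it's worth, here is a minimal sketch of the "set the position to a fraction of the total movement" idea from the comment above; the helper name, its parameters, and the use of `SetCursorPos` are assumptions for illustration, not code from the question:

    #include <windows.h>
    #include <chrono>
    #include <cmath>
    #include <thread>

    // Hypothetical helper: move the cursor from 'from' to 'to' in 'steps'
    // increments spread over 'total' time.
    void MoveMouseInterpolated(POINT from, POINT to, std::chrono::milliseconds total, int steps)
    {
        const auto start = std::chrono::steady_clock::now();
        for (int i = 1; i <= steps; ++i)
        {
            // Compute each position from the fixed endpoints instead of adding a
            // per-step delta, so floating-point error cannot accumulate.
            const double t = static_cast<double>(i) / steps;
            const int x = static_cast<int>(std::lround(from.x + (to.x - from.x) * t));
            const int y = static_cast<int>(std::lround(from.y + (to.y - from.y) * t));
            SetCursorPos(x, y);
            // Sleep until this step's absolute deadline rather than for a fixed
            // per-step duration, so scheduling jitter does not pile up either.
            std::this_thread::sleep_until(start + (total * i) / steps);
        }
    }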

1 Answer


Why did you write your own sleep function? Just use `std::this_thread::sleep_for`, as it doesn't burn CPU while waiting and is reasonably accurate.

Its accuracy might depend on the platform. On my Windows 10 PC it is accurate within 1 millisecond, which should be suitable for durations over 10 ms (= 100 fps).
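
As a rough illustration of that suggestion (my own sketch, not code from the answer), the question's `Time::Sleep` could collapse to a single `sleep_until` on the absolute deadline:

    #include <chrono>
    #include <cstdint>
    #include <thread>

    namespace Time
    {
        // Sketch only: same signature as in the question, but the OS does the
        // whole wait, so no CPU is burned while waiting.
        void Sleep(int64_t sleep_ms, std::chrono::time_point<std::chrono::steady_clock> start)
        {
            // Wake-up accuracy is whatever the scheduler provides (roughly 1-16 ms
            // on Windows, depending on the current timer resolution).
            std::this_thread::sleep_until(start + std::chrono::milliseconds(sleep_ms));
        }
    }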

ALX23z
  • How accurate `std::this_thread::sleep_for()` is on Windows depends on whether anything uses multimedia timers. (That need not be your own application; it could just as well be another one, e.g. a web browser or your music player.) FYI: [SO: Unpredictable behavior of std::sleep_for on Windows 10](https://stackoverflow.com/a/54495870/7478597). I would like to add to the linked answer that every [timeBeginPeriod()](https://learn.microsoft.com/en-us/windows/win32/api/timeapi/nf-timeapi-timebeginperiod) should be accompanied by a `timeEndPeriod()`, as it modifies global system state (a sketch of this pairing follows after these comments). – Scheff's Cat Oct 23 '20 at 05:48
  • Trivia: I came to this knowledge after a collaborator reported that he was able to improve the accuracy of my Visual Sim. appl. when he listened to music while working with it... ;-) – Scheff's Cat Oct 23 '20 at 05:51
  • @Scheff: Considering the context of the question (fast-paced game), the fast timer seems likely. – MSalters Oct 23 '20 at 08:40
  • @MSalters This comment addressed the (IMHO) sloppy statement _On my Windows 10 PC it is accurate within 1 millisecond_. I found it worth mentioning under what conditions that holds. On my Windows 10 PC (after explicitly closing all multimedia apps first), I measured roughly 16 ms, which still seems to be the default task-switching cycle time of Windows. – Scheff's Cat Oct 23 '20 at 10:11
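
As a hedged illustration of the `timeBeginPeriod()` / `timeEndPeriod()` pairing mentioned in the comments above (the wrapper and its name are mine, not from the comments), keeping in mind that this changes global Windows timer resolution:

    #include <windows.h>
    #include <timeapi.h>
    #include <chrono>
    #include <thread>
    #pragma comment(lib, "winmm.lib") // timeBeginPeriod/timeEndPeriod live in winmm

    // Hypothetical wrapper: request ~1 ms scheduler granularity only for this wait.
    void SleepWithFinerResolution(std::chrono::milliseconds d)
    {
        timeBeginPeriod(1);             // raise the system timer resolution (global effect)
        std::this_thread::sleep_for(d); // the actual wait, now accurate to roughly 1-2 ms
        timeEndPeriod(1);               // always pair with a matching timeEndPeriod
    }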