11

I am writing a C++/SDL/OpenGL application, and I have had the most peculiar bug. The game seemed to be working fine with a simple variable timestep. But then the FPS started behaving strangely. I figured out that both Sleep(1) and SDL_Delay(1) take 15 ms to complete.

Any argument to those functions between 0 and 15 takes 15 ms to complete, locking the FPS at about 64. If I set it to 16, it takes 30 ms.

My loop looks like this:

while (1){
    GLuint t = SDL_GetTicks();
    Sleep(1); //or SDL_Delay(1)
    cout << SDL_GetTicks() - t << endl; //outputs 15
}

It will very rarely take 1 ms as it is supposed to, but the majority of the time it takes 15 ms.

My OS is Windows 8.1, the CPU is an Intel i7, and I am using SDL2.

user3346893
  • possible duplicate of [WinAPI Sleep() function call sleeps for longer than expected](http://stackoverflow.com/questions/9518106/winapi-sleep-function-call-sleeps-for-longer-than-expected) – danielschemmel Apr 24 '14 at 02:24
  • You don't want to put a thread/process to sleep if you expect it to wake up in real-time. Use a spinlock if you don't want to worry about scheduling. – Andon M. Coleman Apr 24 '14 at 02:38
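Along the lines of the spin-wait suggestion in the comment above, here is a minimal busy-wait sketch using SDL2's high-resolution counter (SDL_GetPerformanceCounter / SDL_GetPerformanceFrequency); the wait_precise name and its parameterization are illustrative, not part of the question:

#include <SDL.h>

/* illustrative helper: busy-wait until roughly target_ms have elapsed,
   trading CPU time for accuracy instead of relying on Sleep()'s ~15 ms slices */
void wait_precise(double target_ms)
{
    Uint64 freq  = SDL_GetPerformanceFrequency();
    Uint64 start = SDL_GetPerformanceCounter();
    Uint64 ticks = (Uint64)(target_ms * freq / 1000.0);
    while (SDL_GetPerformanceCounter() - start < ticks)
        ;                               /* spin; wakes within microseconds but keeps a core busy */
}

The cost is that the spinning thread keeps one core at 100%, which is why the detailed answer below combines Sleep(1) with a short final spin.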

3 Answers

15

The ticker defaults to 64 Hz, or 15.625 ms per tick. You need to change this to 1000 Hz == 1 ms with timeBeginPeriod(1). MSDN article:

http://msdn.microsoft.com/en-us/library/windows/desktop/dd757624(v=vs.85).aspx
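As a quick illustration of the effect, the measurement loop from the question can be bracketed with timeBeginPeriod/timeEndPeriod; this is only a sketch (using timeGetTime for the measurement), and timings still vary with scheduling, but Sleep(1) should then come back in roughly 1-2 ms instead of ~15 ms:

#include <windows.h>
#include <iostream>
#pragma comment(lib, "winmm.lib")       /* timeBeginPeriod/timeEndPeriod/timeGetTime */

int main()
{
    timeBeginPeriod(1);                 /* request 1 ms timer resolution */
    for (int i = 0; i < 10; ++i)
    {
        DWORD t = timeGetTime();
        Sleep(1);
        std::cout << timeGetTime() - t << std::endl;   /* ~1-2 instead of ~15 */
    }
    timeEndPeriod(1);                   /* restore the previous resolution */
    return 0;
}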

If the goal here is to get a fixed-frequency sequence, you should use a higher-resolution timer, but unfortunately those can only be polled, so a combination of polling and sleeping is needed to reduce CPU overhead. Example code follows; it assumes that a Sleep(1) could take up to almost 2 ms (which does happen with Windows XP, but not with later versions of Windows).

/* code for a thread to run at fixed frequency */
#include <windows.h>                    /* Sleep, QueryPerformanceCounter, timeBeginPeriod */
#pragma comment(lib, "winmm.lib")       /* timeBeginPeriod/timeEndPeriod live in winmm */

#define FREQ 400                        /* frequency */

typedef unsigned long long UI64;        /* unsigned 64 bit int */

LARGE_INTEGER liPerfFreq;               /* used for frequency */
LARGE_INTEGER liPerfTemp;               /* used for query */
UI64 uFreq = FREQ;                      /* process frequency */
UI64 uOrig;                             /* original tick */
UI64 uWait;                             /* tick rate / freq */
UI64 uRem = 0;                          /* tick rate % freq */
UI64 uPrev;                             /* previous tick based on original tick */
UI64 uDelta;                            /* current tick - previous */
UI64 u2ms;                              /* 2ms of ticks */
#if 0                                   /* for optional error check */
static DWORD dwLateStep = 0;
#endif
    /* get frequency */
    QueryPerformanceFrequency(&liPerfFreq);
    u2ms = ((UI64)(liPerfFreq.QuadPart)+499) / ((UI64)500);

    /* wait for some event to start this thread code */
    timeBeginPeriod(1);                 /* set period to 1ms */
    Sleep(128);                         /* wait for it to stabilize */

    QueryPerformanceCounter((PLARGE_INTEGER)&liPerfTemp);
    uOrig = uPrev = liPerfTemp.QuadPart;

    while(1){
        /* update uWait and uRem based on uRem */
        uWait = ((UI64)(liPerfFreq.QuadPart) + uRem) / uFreq;
        uRem  = ((UI64)(liPerfFreq.QuadPart) + uRem) % uFreq;
        /* wait for uWait ticks */
        while(1){
            QueryPerformanceCounter((PLARGE_INTEGER)&liPerfTemp);
            uDelta = (UI64)(liPerfTemp.QuadPart - uPrev);
            if(uDelta >= uWait)
                break;
            if((uWait - uDelta) > u2ms)
                Sleep(1);
        }
        #if 0                    /* optional error check */
        if(uDelta >= (uWait*2))
            dwLateStep += 1;
        #endif
        uPrev += uWait;
        /* fixed frequency code goes here */
        /*  along with some type of break when done */
    }

    timeEndPeriod(1);                   /* restore period */
rcgldr
  • Since uPrev is based on a calculated number of ticks since an original reading of the timer, there won't be any drift over time using this method, as opposed to relying on a delta between current and previous readings of the timer. Since the sleep is set up to allow up to 2 ms of delay, the example code should be good for up to about 400 Hz. If Windows XP support isn't needed, then the delay can assume that Sleep(1) will take about 1 ms (you might want to add some margin to this), in which case it should be good for 800 Hz. – rcgldr Apr 24 '14 at 02:48
  • If the fixed period is an exact multiple of 1 ms, and Windows XP support isn't needed, then the multimedia timer event functions could be used. MSDN article: http://msdn.microsoft.com/en-us/library/windows/desktop/dd742877(v=vs.85).aspx . Windows XP is an issue because its ticker actually runs at 1024 Hz, and 1000 Hz is simulated by inserting an extra tick after 42, 42, then 41 ticks, so that 128 actual ticks end up as 125 pseudo-ticks (to get back on a 1 ms boundary). – rcgldr Apr 24 '14 at 02:53
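For periods that are an exact multiple of 1 ms, a minimal sketch of the multimedia timer event approach mentioned in the comment above might look like the following; the FixedStep name and the 10 ms period are illustrative, and note that Microsoft now steers new code toward other timer APIs:

#include <windows.h>
#pragma comment(lib, "winmm.lib")

/* called on a worker thread every uDelay ms; keep the work short */
void CALLBACK FixedStep(UINT uTimerID, UINT uMsg, DWORD_PTR dwUser, DWORD_PTR dw1, DWORD_PTR dw2)
{
    /* fixed-period work goes here */
}

int main()
{
    timeBeginPeriod(1);
    MMRESULT id = timeSetEvent(10, 1, FixedStep, 0,
                               TIME_PERIODIC | TIME_CALLBACK_FUNCTION);
    Sleep(1000);                        /* let the timer fire ~100 times */
    timeKillEvent(id);
    timeEndPeriod(1);
    return 0;
}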
1

Looks like 15 ms is the smallest slice the OS will deliver to you. I'm not sure about your specific framework, but sleep usually guarantees a minimum sleep time (i.e. it will sleep for at least 1 ms).

DanielEli
0

SDL_Delay()/Sleep() cannot be used reliably with times below 10-15 milliseconds. CPU ticks don't register fast enough to detect a 1 ms difference.

See the SDL documentation for SDL_Delay().

Josh G.
  • CPU ticks register more than quick enough, the real problem is what happens when you put a running thread to sleep. It goes to the back of a ready queue, and then it is at the mercy of scheduling. Some frameworks are smart enough to avoid this behavior if the interval is small enough, but apparently this is not one of them. Alternatively, you could set the process priority to real-time, and the process will frequently preempt others before the typical 10-15 ms quantum. – Andon M. Coleman Apr 24 '14 at 02:45
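If you want to experiment with the priority suggestion from that comment, the relevant call is SetPriorityClass; a minimal sketch follows (REALTIME_PRIORITY_CLASS can starve the rest of the system and requires elevated privileges, so HIGH_PRIORITY_CLASS is usually the safer first step):

#include <windows.h>

int main()
{
    /* raise this process's scheduling priority so it preempts normal-priority
       work sooner; use REALTIME_PRIORITY_CLASS only with great care */
    SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);

    /* ... game loop ... */
    return 0;
}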