
I want to be able to measure time elapsed (for frame time) with my Clock class. (Problem described below the code.)

Clock.h

#include <chrono>
#include <cstdint>

typedef std::chrono::high_resolution_clock::time_point timePt;

class Clock
{
    timePt currentTime;
    timePt lastTime;

public:
    Clock();

    void update();
    uint64_t deltaTime();

};

Clock.cpp

#include "Clock.h"

using namespace std::chrono;

Clock::Clock()
{
    currentTime = high_resolution_clock::now();
    lastTime = currentTime;
}

void Clock::update()
{
    lastTime = currentTime;
    currentTime = high_resolution_clock::now();
}

uint64_t Clock::deltaTime()
{
    microseconds delta = duration_cast<microseconds>(currentTime - lastTime);
    return delta.count();
}

When I try to use Clock like so

Clock clock;

while(1) {
    clock.update();
    uint64_t dt = clock.deltaTime();

    for (int i=0; i < 10000; i++)
    {
        //do something to waste time between updates
        int k = i*dt;
    }
    cout << dt << endl; //time elapsed since last update in microseconds
}

For me it prints "0" about 30 times until it finally prints a number that is always very close to something like "15625" microseconds (15.625 milliseconds).

My question is, why isn't there anything in between? I'm wondering whether my implementation is wrong or whether the precision of high_resolution_clock is acting strange. Any ideas?

EDIT: I am using Code::Blocks with the mingw32 compiler on a Windows 8 computer.

EDIT2: I tried running the following code that should display high_resolution_clock precision:

#include <chrono>
#include <iostream>

template <class Clock>
void display_precision()
{
    typedef std::chrono::duration<double, std::nano> NS;
    NS ns = typename Clock::duration(1);
    std::cout << ns.count() << " ns\n";
}

int main()
{
    display_precision<std::chrono::high_resolution_clock>();
}

For me it prints: "1000 ns". So I guess high_resolution_clock has a precision of 1 microsecond, right? Yet in my tests it seems to have a precision of 16 milliseconds?
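A small sketch that measures the smallest non-zero step the clock actually reports at run time, as opposed to the nominal tick period of its duration type (the helper name display_observed_step is just illustrative):

#include <chrono>
#include <iostream>

// Spin until the reported time changes, to see the smallest step the
// clock actually delivers (as opposed to its nominal tick period).
template <class Clock>
void display_observed_step()
{
    typename Clock::time_point t0 = Clock::now();
    typename Clock::time_point t1 = Clock::now();
    while (t1 == t0)
        t1 = Clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count()
              << " us\n";
}

int main()
{
    display_observed_step<std::chrono::high_resolution_clock>();
}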

jezzi23
  • Are you using a debug build or release build? – Eric Z Jul 27 '15 at 00:26
    `//do something to waste time between updates` is going to be optimised out by the compiler...it doesn't do anything. Replace it with something like `std::this_thread::sleep_for` – user657267 Jul 27 '15 at 00:33
  • It would help to know the compiler and version you're using. Visual Studio 2013 in particular does not have a true high resolution clock, and a delta of 15 ms sounds pretty suspicious since that's generally the resolution on Windows. – Retired Ninja Jul 27 '15 at 00:39
  • @EricZ I'm using debug build. I tried using release build just now and the only difference was a lot more of "0" prints before an "15xxx" number occurs. – jezzi23 Jul 27 '15 at 00:47
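
A minimal sketch of the suggestion from the comments, using std::this_thread::sleep_for so the delay cannot be optimized away (this assumes the toolchain provides std::thread support, which some mingw32 builds do not):

#include <chrono>
#include <cstdint>
#include <iostream>
#include <thread>

#include "Clock.h"

int main()
{
    Clock clock;

    while (true) {
        clock.update();
        uint64_t dt = clock.deltaTime();

        // Sleep instead of spinning so the delay cannot be optimized out.
        std::this_thread::sleep_for(std::chrono::milliseconds(5));

        std::cout << dt << std::endl; // microseconds since the previous update
    }
}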

2 Answers


What system are you using? (I guess it's Windows? Visual Studio is known to have had this problem, now fixed in VS 2015, see the bug report.) On some systems high_resolution_clock is defined as just an alias for system_clock, which can have really low resolution, like the 16 ms you are seeing. See for example this question.
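
One way to check whether high_resolution_clock is merely an alias for system_clock (or for steady_clock) on a given standard library is a compile-time comparison; a minimal sketch:

#include <chrono>
#include <iostream>
#include <type_traits>

int main()
{
    using namespace std::chrono;
    // true if high_resolution_clock is merely a typedef for system_clock,
    // as it is on several standard library implementations.
    std::cout << std::boolalpha
              << std::is_same<high_resolution_clock, system_clock>::value << '\n'
              << std::is_same<high_resolution_clock, steady_clock>::value << '\n';
}

Where the first line prints true, high_resolution_clock inherits system_clock's coarse update rate, which matches the ~15.6 ms steps observed above.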

Ilya Popov
  • This is no longer a bug in current versions of Visual Studio. (i.e. VS2015 and later) – Casey Jul 27 '15 at 00:47
  • @Casey Yes, this is why I wrote it in past tense. I just clarified my answer anyway. – Ilya Popov Jul 27 '15 at 00:52
  • I'm using Codeblocks with mingw32 compiler (version 4.7.1 I believe) on a Windows 8 computer. – jezzi23 Jul 27 '15 at 00:52
  • From the answers to the question linked it follows that mingw also uses `typedef system_clock high_resolution_clock;`. There are also some workaround suggestions (using boost, or writing your own replacement). – Ilya Popov Jul 27 '15 at 01:01
  • @IlyaPopov I tried running a code that prints the precision of high_resolution_clock (code is in edited post) and it indicated that the precision is 1 microsecond. It doesn't really make sense to me that in my tests it seems to have a 16 milliseconds precision. (this was also the case for system_clock so it is indeed the same type) – jezzi23 Jul 27 '15 at 01:18
  • What you have measured has nothing to do with real precision of the timer. It is just minimum time interval representable by the type of return value, nothing more. – Ilya Popov Jul 27 '15 at 21:04
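
A minimal sketch of the "write your own replacement" route mentioned in the comments, wrapping QueryPerformanceCounter in a chrono-compatible clock (Windows-only; the name qpc_clock is just illustrative):

#include <chrono>
#include <windows.h>

// Chrono-compatible clock backed by QueryPerformanceCounter.
struct qpc_clock
{
    typedef std::chrono::nanoseconds                      duration;
    typedef duration::rep                                 rep;
    typedef duration::period                              period;
    typedef std::chrono::time_point<qpc_clock, duration>  time_point;
    static const bool is_steady = true;

    static time_point now()
    {
        static const long long freq = query_frequency();
        LARGE_INTEGER count;
        QueryPerformanceCounter(&count);
        // Split into whole seconds and remainder to avoid 64-bit overflow
        // when converting raw ticks to nanoseconds.
        const long long secs = count.QuadPart / freq;
        const long long rest = count.QuadPart % freq;
        return time_point(duration(secs * 1000000000LL
                                   + rest * 1000000000LL / freq));
    }

private:
    static long long query_frequency()
    {
        LARGE_INTEGER f;
        QueryPerformanceFrequency(&f);
        return f.QuadPart;
    }
};

The question's Clock class could then point its timePt typedef at qpc_clock::time_point and call qpc_clock::now() instead of high_resolution_clock::now(), with deltaTime() left unchanged.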

I have the same problem with msys2 on Windows 10: the delta returned is 0 for most of the subfunctions I tested, and then it suddenly returns 15xxx or 24xxx microseconds. I thought there was a problem in my code, since none of the tutorials mention any problem. The same goes for difftime(finish, start) from time.h, which often returns 0.

I finally replaced all my uses of high_resolution_clock with steady_clock, and now I get the proper times:

auto t_start = std::chrono::steady_clock::now();
_cvTracker->track(image); // my function to test
std::cout << "Time taken = " << std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::steady_clock ::now() - t_start).count() << " microseconds" << std::endl;
// returns the proper value (or at least a plausible value)

whereas this returns mostly 0:

auto t_start = std::chrono::high_resolution_clock::now();
_cvTracker->track(image); // my function to test
std::cout << "Time taken = " << std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::high_resolution_clock::now() - t_start).count() << " microseconds" << std::endl;
// returns 0 most of the time

difftime does not seem to work either:

time_t start, finish;
time(&start);
_cvTracker->track(image);
time(&finish);
std::cout << "Time taken= " << difftime(finish, start) << std::endl;
// returns 0 most of the time
PJ127