36

Does anyone know how to calculate time difference in C++ in milliseconds? I used difftime but it doesn't have enough precision for what I'm trying to measure.

Veger
Alejo

8 Answers

82

I know this is an old question, but there's an updated answer for C++11 (formerly C++0x): the new <chrono> header contains modern time utilities. Example use:

#include <iostream>
#include <thread>
#include <chrono>

int main()
{
    typedef std::chrono::high_resolution_clock Clock;
    typedef std::chrono::milliseconds milliseconds;
    Clock::time_point t0 = Clock::now();
    std::this_thread::sleep_for(milliseconds(50));
    Clock::time_point t1 = Clock::now();
    milliseconds ms = std::chrono::duration_cast<milliseconds>(t1 - t0);
    std::cout << ms.count() << "ms\n";
}

which prints out (approximately):

50ms

More information can be found here:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2661.htm

There is also now a Boost implementation of <chrono> (Boost.Chrono).
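
For compilers that don't ship <chrono> yet, a minimal sketch of the same measurement using the Boost implementation (assuming Boost.Chrono is installed and linked) could look like:

#include <boost/chrono.hpp>
#include <iostream>

int main()
{
    typedef boost::chrono::high_resolution_clock Clock;
    typedef boost::chrono::milliseconds milliseconds;

    Clock::time_point t0 = Clock::now();
    // ... code to measure ...
    Clock::time_point t1 = Clock::now();

    milliseconds ms = boost::chrono::duration_cast<milliseconds>(t1 - t0);
    std::cout << ms.count() << "ms\n";
}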

Howard Hinnant
  • Is this accurate for nanoseconds? I mean you have written a very different approach over [here](http://stackoverflow.com/questions/275004/c-timer-function-to-provide-time-in-nano-seconds) – Chani Jul 06 '14 at 18:14
  • @Wildling: The other approach is in another context, where other answers are using the rdtsc assembly instruction. That answer simply shows how to integrate the rdtsc assembly instruction into a chrono clock. This answer shows how to get a time difference in milliseconds using the chrono facility. The accuracy will be dependent upon the supplied high_resolution_clock. The resolution of this clock is inspectable via `high_resolution_clock::period`. On my system that happens to be nanoseconds. On yours it may be something different. – Howard Hinnant Jul 06 '14 at 18:24
  • I just tried both your ways (this answer and the other one) to profile some code. The results from the `class clock` came out to be half of what the above code shows. Would you know why? – Chani Jul 06 '14 at 18:45
  • It is hard to know without seeing your exact code. However guesses might include: you neglected to convert the clock ticks to a known unit such as nanoseconds. Or perhaps the period you entered for your clock was not an accurate representation of your processor speed. Or perhaps the timed code was so short that you are pushing the lower limits of what can be accurately timed (all clocks have overhead). It is good that you are experimenting with these different clocks. That is a good way to learn about them. – Howard Hinnant Jul 06 '14 at 19:04
  • I just did some more runs and realised the results are actually not very far from each other! However, sometimes the code executes in a very short time. Can you please look at the results: http://pastebin.com/zWGERp3t – Chani Jul 06 '14 at 19:19
  • Looks fine to me. You may need to shut down background processes to get a more stable result. E.g. turn off the music player, email, auto backup process, perhaps reboot to make sure you have a "clean machine." I find that I get fairly stable results on OS X, until "time machine" starts backing up, and then my timings are all over the place. – Howard Hinnant Jul 06 '14 at 19:45
  • Yeah, thought so. Thanks a lot for taking time. – Chani Jul 06 '14 at 19:52
21

You have to use one of the more specific time structures, either timeval (microsecond-resolution) or timespec (nanosecond-resolution), but you can do it manually fairly easily:

#include <sys/time.h>   // for timeval

// Returns t1 - t2 in milliseconds: compute the difference in microseconds,
// then divide down to milliseconds.
int diff_ms(timeval t1, timeval t2)
{
    return (((t1.tv_sec - t2.tv_sec) * 1000000) +
            (t1.tv_usec - t2.tv_usec)) / 1000;
}

This obviously has some problems with integer overflow if the difference in times is really large (or if you have 16-bit ints), but that's probably not a common case.
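
Along the lines of the suggestions in the comments below, here is a sketch of a variant (the name diff_ms_rounded is just illustrative) that keeps the intermediate arithmetic in milliseconds rather than microseconds and rounds to the nearest millisecond; it assumes t1 >= t2:

#include <sys/time.h>

// Illustrative variant: normalize the microsecond field, keep the *1000 on the
// seconds difference only, and round the remainder to the nearest millisecond.
// Assumes t1 >= t2.
int diff_ms_rounded(timeval t1, timeval t2)
{
    long sec  = t1.tv_sec  - t2.tv_sec;
    long usec = t1.tv_usec - t2.tv_usec;   // may be negative
    if (usec < 0)                          // borrow a second if needed
    {
        --sec;
        usec += 1000000;
    }
    return (int)(sec * 1000 + (usec + 500) / 1000);
}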

Tyler McHenry
  • I think you meant *1000 not *1000000 – SoapBox Nov 21 '08 at 02:06
  • 3
    You might want to add +500 usec before dividing by 1000 there, so that 999usec is rounded up to 1msec not down to 0msec. – Mr.Ree Nov 21 '08 at 02:29
  • 4
    No, I did mean *1000000. It's doing the calculation in us and then converting to ms at the end. The +500 suggestion is a good one, though. – Tyler McHenry Nov 21 '08 at 14:16
  • 1
    Only 5 years late to the party, but I agree with @SoapBox, you can minimize your overflow issue if you take that multiplication outside of the inner parens and mult by 1000, i.e. make the addition operate on MS. Alternatively we can use the standard timersub, then convert the result tv to MS. – nhed Nov 02 '13 at 14:41
7

If you are using Win32, FILETIME is the most accurate you can get: it contains a 64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC).

So if you want to calculate the difference between two times in milliseconds you do the following:

#include <windows.h>
#include <tchar.h>

UINT64 getTime()
{
    SYSTEMTIME st;
    GetSystemTime(&st);

    FILETIME ft;
    SystemTimeToFileTime(&st, &ft);  // converts to file time format
    ULARGE_INTEGER ui;
    ui.LowPart = ft.dwLowDateTime;
    ui.HighPart = ft.dwHighDateTime;

    return ui.QuadPart;              // 100-nanosecond intervals since 1601-01-01
}

int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
{
    //! Start counting time
    UINT64 start, finish;

    start = getTime();

    //do something...

    //! Stop counting elapsed time
    finish = getTime();

    //now you can calculate the difference any way that you want
    //in seconds:
    _tprintf(_T("Time elapsed executing this code: %.03f seconds.\n"), (((float)(finish - start)) / ((float)10000)) / 1000);
    //or in milliseconds:
    _tprintf(_T("Time elapsed executing this code: %I64d milliseconds.\n"), (finish - start) / 10000);

    return 0;
}
Nuno
  • +1 for a pure win32 environment. Simple and efficient. And again I learned something. – tfl Apr 06 '13 at 16:51
5

The clock() function gives you a timer that you can convert to milliseconds (via CLOCKS_PER_SEC), but it's not the greatest: its real resolution is going to depend on your system. You can try

#include <time.h>
#include <iostream>

clock_t clo = clock();
//do stuff
std::cout << (clock() - clo) * 1000 / CLOCKS_PER_SEC << " ms" << std::endl;

and see how your results are.

Bill the Lizard
  • That's pretty typical on Unix and Linux systems. I think it can be as bad as about 50 ms, though. – Bill the Lizard Nov 21 '08 at 02:16
  • 1
    The CLOCKS_PER_SEC macro in <time.h> tells you how many ticks there are per second. It was classically 50 or 60, giving 20 or 16.7 ms. – Jonathan Leffler Nov 21 '08 at 07:53
  • Actually, CLOCKS_PER_SEC gives you the number of clock_t units per second. For example, you might have 1000 CLOCKS_PER_SEC (clock() returns milliseconds) yet have clock() return multiples of 16 ms. Call clock() in a tight loop and it will return: x, ..., x, x+16, ..., x+16, x+32... on my system – aib Nov 21 '08 at 16:22
2

You can use gettimeofday to get the number of microseconds since epoch. The seconds segment of the value returned by gettimeofday() is the same as that returned by time() and can be cast to a time_t and used in difftime. A millisecond is 1000 microseconds.

After you use difftime, calculate the difference in the microseconds field yourself.
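
A minimal sketch of this approach, computing the millisecond difference directly from the two timeval values instead of going through difftime:

#include <sys/time.h>
#include <cstdio>

int main()
{
    timeval start, stop;
    gettimeofday(&start, NULL);

    // ... code to measure ...

    gettimeofday(&stop, NULL);

    long ms = (stop.tv_sec  - start.tv_sec)  * 1000L +
              (stop.tv_usec - start.tv_usec) / 1000L;
    std::printf("%ld ms\n", ms);
}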

SoapBox
2

You can get micro and nanosecond precision out of Boost.Date_Time.
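
For example, a minimal sketch using the microsecond clock from Boost.Date_Time (assuming Boost is available):

#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    using namespace boost::posix_time;

    ptime t1 = microsec_clock::universal_time();
    // ... code to measure ...
    ptime t2 = microsec_clock::universal_time();

    time_duration diff = t2 - t1;
    std::cout << diff.total_milliseconds() << " ms\n";
}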

Ferruccio
1

If you're looking to do benchmarking, you might want to see some of the other threads here on SO which discuss the topic.

Also, be sure you understand the difference between accuracy and precision.

Alastair
0

I think you will have to use something platform-specific. Hopefully that won't matter? E.g. on Windows, look at QueryPerformanceCounter(), which will give you much better than millisecond resolution.
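
A minimal sketch of that approach (illustrative only; error checking omitted):

#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);   // counts per second

    QueryPerformanceCounter(&start);
    // ... code to measure ...
    QueryPerformanceCounter(&stop);

    double ms = (stop.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;
    std::printf("Elapsed: %.3f ms\n", ms);
}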

Xantium
Peter