#include <windows.h>
#include <stdio.h>
#include <stdint.h>

// assuming we return times with microsecond resolution
#define STOPWATCH_TICKS_PER_US  1

uint64_t GetStopWatch()
{
    LARGE_INTEGER t, freq;

    QueryPerformanceCounter(&t);
    QueryPerformanceFrequency(&freq);
    return (uint64_t) (t.QuadPart / (double) freq.QuadPart * 1000000);
}

void task()
{
    printf("hi\n");
}

int main()
{
  uint64_t start = GetStopWatch();
  task();
  uint64_t stop = GetStopWatch();

  printf("Elapsed time (microseconds): %lld\n", stop - start);
}

The above code uses QueryPerformanceCounter, which retrieves the current value of the high-resolution performance counter, and QueryPerformanceFrequency, which retrieves the frequency of that counter. If I call the task() function multiple times, the difference between the start and stop times varies, but I expected to get the same time difference on every call. Could anyone help me identify the mistake in the above code?

sachin s
  • This isn't even your code. The code was provided by Ale [here](http://stackoverflow.com/a/19822725/1504523). – Arno Nov 11 '13 at 10:01
  • Ale gave me the above code and I tried to adapt it for my application, but I am getting a different elapsed time each time I call the task function. – sachin s Nov 11 '13 at 10:11

2 Answers


The thing is, Windows is a pre-emptive multi-tasking operating system. What the hell does that mean, you ask?

'Simple' - Windows allocates time-slices to each of the running processes in the system. This gives the illusion of dozens or hundreds of processes running in parallel. In reality, you are limited to 2, 4, 8 or perhaps 16 parallel processes on a typical desktop/laptop. An Intel i3 has 2 physical cores, each of which can give the impression of doing two things at once. (In reality, there are hardware tricks going on that switch execution between the two threads that each core can handle at once.) This is in addition to the software context switching that Windows/Linux/MacOSX do.

These time-slices are not guaranteed to be of the same duration each time. You may find the PC does a sync with windows.time to update your clock, you may find that the virus-scanner decides to begin working, or any one of a number of other things. All of these events may occur after your task() function has begun, yet before it ends.

In the DOS days, you'd get very nearly the same result each and every time you timed a single iteration of task(). Though, thanks to TSR programs, you could still find an interrupt was fired and some machine-time stolen during execution.

It is for just these reasons that a more accurate estimate of the time a task takes to execute can be obtained by running the task N times and dividing the elapsed time by N to get the time per iteration.

For some functions in the past, I have used values for N as large as 100 million.

EDIT: A short snippet.

LARGE_INTEGER tStart, tEnd;
LARGE_INTEGER tFreq;
double tSecsElapsed;
QueryPerformanceFrequency(&tFreq);
QueryPerformanceCounter(&tStart);

int i, n = 100;
for (i=0; i<n; i++)
{
// Do Something
}

QueryPerformanceCounter(&tEnd);
tSecsElapsed = (tEnd.QuadPart - tStart.QuadPart) / (double)tFreq.QuadPart;
double tMsElapsed = tSecsElapsed * 1000;

double tMsPerIteration = tMsElapsed / (double)n;
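
And, to tie it back to the question, here is a minimal, self-contained sketch of the same approach applied to the asker's task() function. The iteration count of 1000 is an arbitrary choice for illustration, and keeping the printf() inside the loop is only done to mirror the original code:

#include <windows.h>
#include <stdio.h>

void task(void)
{
    printf("hi\n");
}

int main(void)
{
    LARGE_INTEGER tStart, tEnd, tFreq;
    const int n = 1000;                 // arbitrary iteration count, just for illustration

    QueryPerformanceFrequency(&tFreq);  // counts per second
    QueryPerformanceCounter(&tStart);

    for (int i = 0; i < n; i++)
        task();

    QueryPerformanceCounter(&tEnd);

    double secsElapsed = (tEnd.QuadPart - tStart.QuadPart) / (double)tFreq.QuadPart;
    double usPerIteration = secsElapsed * 1000000.0 / n;

    printf("Total: %f seconds, per iteration: %f microseconds\n", secsElapsed, usPerIteration);
    return 0;
}

The per-iteration figure will still vary from run to run, but far less than timing a single call.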
enhzflep
  • Could you please help me make the changes in the above code? – sachin s Nov 11 '13 at 10:09
  • Yeah, I just have. I told you what you need to do, so go do it. Copy/Paste coders need to be beaten-up and ostracised, not spoon-fed! If you are still too lazy to think about it, I'll give you this: put the call to task() inside a for loop with N iterations. When it's done, calculate the iteration time with `iterationTime = (stopTime-startTime)/N;` (and yes, I am in a bad mood, sorry, but that's what you get for asking inane questions without so much as a thank-you!) – enhzflep Nov 11 '13 at 10:16
  • Thank you very much! I know how to call the task function multiple times in the above code. My question: I am calling the task function multiple times and it just prints "hi" every time (not shown in the above code). Why does it display a different elapsed time each time? – sachin s Nov 11 '13 at 10:30
  • You're welcome. I really can't see your screen, you know, so it's a bit hard to speculate... I do wonder if the order of your mathematical operations is causing an overflow. I'm too tired to desk-check that at the moment. But I have supplied some code that I use, that works perfectly well for 100 million iterations (i.e. n = 100000000). You'll find the code in my original solution, which I've updated. :) – enhzflep Nov 11 '13 at 12:18
  • #include "TIMER1.h"
    #include "MAIN.h"
    typedef unsigned int uint16_t;
    uint16_t count;
    /**
     * This task is activated every 2ms.
     */
    void TASK1( )
    {
        LARGE_INTEGER start, stop, freq;
        double time_len;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&start);
        printf("hi\n");
        QueryPerformanceCounter(&stop);
        time_len = (stop.QuadPart - start.QuadPart) / (double) freq.QuadPart;
        printf("Time per iteration: %0.8f seconds.\n", time_len);
    }
    – sachin s Nov 11 '13 at 13:28
  • I am calling the TASK1() function from the main function every 2 ms (which is not shown in the above code). I run the code and get a different time_len value each time. – sachin s Nov 11 '13 at 13:32
  • As far as I know, it should display the same time_len value every time. Is there any modification I can make to get the same value? – sachin s Nov 11 '13 at 13:33
  • Again, you can't be absolutely certain of the time-slice given to any program - not even when it's a service. Multimedia timers are more accurate than general purpose ones, but still not 1000% bomb-proof. *NNNNNOOOOO!* It won't display the same elapsed time if your precision is high enough (i.e. 0.01 of a day is different to 0.01 of a ms or us. 0.01 units of each period is the smallest you can measure, but they're _very_ different lengths of time) - That's the whole point of running it N times - so you can get a reasonably accurate _average_. That was why I used 100,000,000 - for a stable average. – enhzflep Nov 11 '13 at 14:06
  • Thank you. I am using the create timer queue function to call the TASK1() function every 2 ms, and later using QueryPerformanceCounter to calculate the start and end times of a specific task. Is that right? – sachin s Nov 11 '13 at 14:19
  • The create timer queue function is also a high-resolution timer. – sachin s Nov 11 '13 at 14:27
  • Yes, that sounds fine. There will be variation in the times that Task() takes - it's simply a consequence of any modern, non-realtime OS. See Hans' answer - most particularly the part about filtering out unwanted results from your result-set. DO NOT use the average - use the median value. I might have a look at why using the mode-value is no good. – enhzflep Nov 11 '13 at 15:32

Code execution time on modern operating systems and processors is very unpredictable. There is no scenario where you can be sure that the elapsed time actually measured the time taken by your code; your program may well have lost the processor to another process while it was executing. The caches used by the processor play a big role: code is always a lot slower the first time it is executed, when the caches do not yet contain the code and data used by the program. The memory bus is very slow compared to the processor.

It gets especially meaningless when you measure a printf() statement. The console window is owned by another process, so there's a significant chunk of process interop overhead whose execution time critically depends on the state of that process. You'll suddenly see a huge difference when the console window needs to be scrolled, for example. And most of all, there isn't actually anything you can do to make it faster, so measuring it is only interesting out of curiosity.

Profile only code that you can improve. Take many samples so you can get rid of the outliers. Never pick the lowest measurement, that just creates unrealistic expectations. Don't pick the average either, that is affected too much by the long delays that other processes can impose on your test. The median value is a good choice.
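
To illustrate that sampling approach, here is a minimal sketch (not part of the original answer) that times the task() call from the question many times, sorts the samples, and reports the median. The sample count of 101 is an arbitrary odd number so the median is a single element:

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

#define SAMPLES 101   /* odd count so the median is a single element */

static void task(void)
{
    printf("hi\n");
}

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    LARGE_INTEGER freq, t0, t1;
    double samples[SAMPLES];

    QueryPerformanceFrequency(&freq);

    for (int i = 0; i < SAMPLES; i++)
    {
        QueryPerformanceCounter(&t0);
        task();
        QueryPerformanceCounter(&t1);
        /* store each measurement in microseconds */
        samples[i] = (t1.QuadPart - t0.QuadPart) * 1000000.0 / (double)freq.QuadPart;
    }

    /* sort the samples and take the middle one, i.e. the median */
    qsort(samples, SAMPLES, sizeof(double), cmp_double);
    printf("Median time per call: %f microseconds\n", samples[SAMPLES / 2]);
    return 0;
}

Nothing here eliminates the run-to-run variation; it just keeps the occasional very slow sample from distorting the reported figure.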

Hans Passant
  • Gets my vote for pointing out the median value as having the most utility - after the other three I tried to give it for a clear, concise and comprehensive answer. Thanks for the opportunity to improve mine. – enhzflep Nov 11 '13 at 15:28
  • @Hans Passant: Thank you very much for the reply. Could you please tell me what the median value is? – sachin s Nov 11 '13 at 16:09
  • Sort the samples, take the middle one. The obvious google query is "median value", top hits are all good. – Hans Passant Nov 11 '13 at 16:20