
To keep track of performance in our software, we measure the duration of the calls we are interested in.

For example:

using (var performanceTrack = new PerformanceTracker("pt-1"))
{
    // do some stuff

    CallAnotherMethod();

    using (var anotherPerformanceTrack = new PerformanceTracker("pt-1a"))
    {
        // do stuff

        // .. do something
    }

    using (var anotherPerformanceTrackb = new PerformanceTracker("pt-1b"))
    {
        // do stuff

        // .. do something
    }

    // do more stuff
}

This will result in something like:

pt-1  [----------------------------] 28ms

      [--]                            2ms from another method

pt-1a   [-----------]                11ms

pt-1b                [-------------] 13ms

In the constructor of PerformanceTracker I start a Stopwatch (as far as I know, that's the most reliable way to measure a duration). In the Dispose method I stop the Stopwatch and save the result to Application Insights.
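
For illustration, a minimal sketch of such a tracker as described above (simplified; the TelemetryClient.TrackMetric call is an assumption about how the result reaches Application Insights):

using System;
using System.Diagnostics;
using Microsoft.ApplicationInsights;

public sealed class PerformanceTracker : IDisposable
{
    // shared telemetry client (assumption: the real class may wire this up differently)
    private static readonly TelemetryClient Telemetry = new TelemetryClient();

    private readonly string _name;
    private readonly Stopwatch _stopwatch;

    public PerformanceTracker(string name)
    {
        _name = name;
        _stopwatch = Stopwatch.StartNew(); // start timing on construction
    }

    public void Dispose()
    {
        _stopwatch.Stop(); // stop timing when the using block ends
        Telemetry.TrackMetric(_name, _stopwatch.Elapsed.TotalMilliseconds);
    }
}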

I have noticed a lot of fluctuation in the results. To reduce this I've already done the following:

  • Run in a release build, outside of Visual Studio.
  • Do a warm-up call first, which is not included in the statistics.
  • Force a garbage collection before every call (75 calls in total); see the sketch below.
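
In code, the measurement loop looks roughly like this (a sketch of the steps above; MethodUnderTest is a placeholder name):

using System;
using System.Collections.Generic;
using System.Diagnostics;

MethodUnderTest(); // warm-up call, not recorded

var samples = new List<double>(75);
for (int i = 0; i < 75; i++)
{
    // force a full collection so GC pauses don't land inside the timed call
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    var sw = Stopwatch.StartNew();
    MethodUnderTest();
    sw.Stop();
    samples.Add(sw.Elapsed.TotalMilliseconds);
}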

After this the fluctuation is less, but still not very accurate. For example, I have run my test set twice; here are the results in milliseconds:

Avg:         782.946666666667 vs 981.68
Min:         489 vs 513
Max:         2600 vs 4875
Stdev:       305.854933523003 vs 652.343471128764
Sample size: 75 vs 75
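
These statistics are computed over the 75 samples per run; for reference, a sketch of how they can be derived from the hypothetical samples list above:

// continuing from the samples list in the loop sketch; needs using System.Linq;
double avg = samples.Average();
double min = samples.Min();
double max = samples.Max();
// sample standard deviation (n - 1 in the denominator)
double stdev = Math.Sqrt(samples.Sum(s => (s - avg) * (s - avg)) / (samples.Count - 1));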

Why is the performance measurement with the Stopwatch still showing so much variation in the results? I found on SO (https://stackoverflow.com/a/16157458/1408786) that I should maybe add the following to my code:

// prevent the JIT compiler from optimizing the measured function calls away
long seed = Environment.TickCount;

// use the second core/processor for the test
Process.GetCurrentProcess().ProcessorAffinity = new IntPtr(2);

// prevent "normal" processes from interrupting threads
Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;

// prevent "normal" threads from interrupting this thread
Thread.CurrentThread.Priority = ThreadPriority.Highest;

But the problem is, we have a lot of async code. How can I get reliable performance tracking in this code? My aim is to discover performance degradation when, for example, a method is 10ms slower after a check-in than before...
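
As far as I understand, Stopwatch measures wall-clock time, so a single instance keeps measuring correctly across await points even when continuations resume on other thread-pool threads; it is only the per-thread priority above (Thread.CurrentThread.Priority) that does not follow the work onto those threads, while the process-wide affinity and priority class still apply. A minimal sketch of timing an async call (MeasureAsync and DoWorkAsync are made-up names):

using System;
using System.Diagnostics;
using System.Threading.Tasks;

public static class AsyncTiming
{
    // The Stopwatch is captured by the async state machine, so the same
    // instance is stopped after the awaited work completes, regardless of
    // which thread the continuation runs on.
    public static async Task<double> MeasureAsync(Func<Task> action)
    {
        var sw = Stopwatch.StartNew();
        await action();
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds;
    }
}

// usage:
// double ms = await AsyncTiming.MeasureAsync(() => DoWorkAsync());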

  • Would it be feasible for you to do multiple iterations of a function call and then take the average running time? I bet the variations will be almost nonexistent over a couple of thousand runs. – Marco Dec 28 '18 at 13:58
  • How about using a profiler? – FCin Dec 28 '18 at 14:12
  • I guess look at how https://github.com/dotnet/BenchmarkDotNet does it. – Dec 28 '18 at 14:51
  • "the fluctuation is less, but still not very accurate" is the correct outcome. You are not on an RT OS. You can't predict the length of a commute in heavy traffic. Just the average. – H H Jan 01 '19 at 07:36
  • Performance testing is hard. The fluctuations can also come from some other background process running at the same time that you are running the test. To make your results more consistent you can try running them on isolated machines with all the updates, services, apps, etc. turned off. – buxter Feb 07 '19 at 18:35

0 Answers