
A colleague and I were looking over a Visual Studio Profiling report in VS2012 and they asked me, "Why would you use percentages to express duration of time in a method or time spent calling the method?"

My explanation was that the tool is providing some representation of which methods/calls take a long time, or which parts of a method take a long time. That can be expressed as an abstraction (a percentage) or as something absolute (time in ms), but either is enough to point you to the problem areas in your application.

We weren't especially convinced by that, so I thought I'd ask the internet.

Steve Duitsman

5 Answers


This is Andre Hamilton from the Visual Studio Profiler team. The reason the values are in percentages and not in ms is that you are seeing a report based on sample profiling, not instrumentation-based profiling.

Sample profiling: Basically, the operating system will periodically do a stack walk. The results you see in the profiling report represent the fraction of those stack walks in which a particular function was on the stack.
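
To make that concrete, here is a minimal C# sketch (nothing like the profiler's real implementation; the function names and sample data are invented) of how a pile of stack walks becomes the percentages in the report:

```csharp
// Minimal sketch (not the real profiler): turning periodic stack walks into
// the "% of samples" figures shown in a sampling report.
using System;
using System.Collections.Generic;
using System.Linq;

class SamplingSketch
{
    static void Main()
    {
        // Each entry is one stack walk: the functions on the stack at that instant.
        var samples = new List<string[]>
        {
            new[] { "Main", "LoadData", "ParseRow" },
            new[] { "Main", "LoadData", "ParseRow" },
            new[] { "Main", "Render" },
            new[] { "Main", "LoadData" },
        };

        // Inclusive percentage per function: how often it appeared on the stack.
        var groups = samples
            .SelectMany(stack => stack.Distinct())
            .GroupBy(f => f)
            .OrderByDescending(g => g.Count());

        foreach (var g in groups)
            Console.WriteLine($"{g.Key,-10} {100.0 * g.Count() / samples.Count,5:F1}% of samples");

        // There is no wall-clock time anywhere, only counts of samples, which is
        // why this kind of report can only honestly show percentages.
    }
}
```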

Instrumentation profiling: We basically modify the binary (either statically or dynamically) and intercept the beginning and end of functions. We then take a timestamp when we enter and exit the function. This gives you precise information about the function's execution, but it doesn't come without cost. Because information is taken at every function entry and exit, the resulting profiling report can be huge (it is not unknown to have over 1 GB of data from just a few seconds of program execution). Also, if static instrumentation is used on signed binaries, you will need to have them re-signed, which can complicate the development process. Dynamic instrumentation helps here, but it does not save you from the data overhead. Unless you are specifically looking for timing information, sampling is really the way to go, as expressed by the other posters.
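
For contrast, a rough sketch of the instrumentation idea. The Enter/Exit probes here are hypothetical stand-ins for whatever the instrumenter actually injects:

```csharp
// Rough sketch of instrumentation: timestamped probes at function entry and exit.
// Enter/Exit are illustrative stand-ins, not real profiler APIs.
using System;
using System.Diagnostics;
using System.Threading;

class InstrumentationSketch
{
    static readonly Stopwatch Clock = Stopwatch.StartNew();

    static void Enter(string fn) =>
        Console.WriteLine($"ENTER {fn,-8} at {Clock.Elapsed.TotalMilliseconds,8:F3} ms");

    static void Exit(string fn) =>
        Console.WriteLine($"EXIT  {fn,-8} at {Clock.Elapsed.TotalMilliseconds,8:F3} ms");

    static void LoadData()
    {
        Enter(nameof(LoadData));   // probe at function entry
        Thread.Sleep(50);          // the real work
        Exit(nameof(LoadData));    // probe at function exit
    }

    static void Main()
    {
        Enter(nameof(Main));
        LoadData();
        Exit(nameof(Main));
        // Exact timestamps at every entry and exit give precise per-function times,
        // but the event volume grows with every call, hence the huge trace files.
    }
}
```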

FYI, Visual Studio comes with vsinstr for static instrumentation (found in \Team Tools\Performance Tools).

Andre Hamilton
  • As an exercise, profile a JavaScript Windows Store application as well as a C# one. You will notice that the JavaScript one includes ms information; that is because JavaScript profiling in VS2012 is done via instrumentation, whereas managed Windows Store applications are sampled – Andre Hamilton Apr 14 '13 at 07:01

Actually, some profilers do give absolute time in addition to percentage.

The real question is how useful these timings are, considering that you could get different timings depending on things like the current machine load and the specifications of the machine itself. Also, remember that when you run code under a profiler it runs more slowly than an unprofiled run, so even the profiled run doesn't accurately reflect the true running time.

For these reasons, some may consider the absolute time irrelevant. If you then assume that timings change between runs by multiplication by some constant factor, the percentage would be the quantity to look at. The percentage preserves the ratios between absolute times, so if something takes twice the time, it will have twice the percentage.

Of course the percentage is not perfect, since there is no guarantee that changes will be multiplicative (overhead, for example, would be additive).
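
A quick illustration with made-up timings: on a machine that is uniformly twice as slow, every absolute time doubles but the percentages are unchanged:

```csharp
// Made-up timings: the same program on a machine that is uniformly 2x slower.
// Absolute times double, percentages stay identical.
using System;

class RatioSketch
{
    static void Print(string label, double[] msPerFunction)
    {
        double total = 0;
        foreach (var t in msPerFunction) total += t;
        Console.Write($"{label}: ");
        foreach (var t in msPerFunction)
            Console.Write($"{t,4:F0} ms ({100 * t / total:F1}%)  ");
        Console.WriteLine();
    }

    static void Main()
    {
        Print("fast machine", new double[] { 200, 100, 100 });
        Print("slow machine", new double[] { 400, 200, 200 });
    }
}
```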

Bitwise

What's the goal?

  1. Just getting time measurements you can put on a PowerPoint slide? Or...

  2. Finding out how to make the whole thing take less time? (Other than just running it on a faster chip.)

If the goal is (2), then the thing to do is find activities within the software that a) account for a large percent of wall-clock time, and b) aren't strictly necessary. The reason is that if you can get rid of an activity taking fraction X (like 50%) of the time, the speedup factor you get is up to 1/(1-X), or two times in that case.

I'm being careful to use the word "activity" here, because it's a very general concept. If you only think you're looking for "slow routines", you're going to miss big speedup opportunities, and that's what you cannot afford to do, if you actually care about performance.

The key point is that speedup opportunities are like rocks. They come in multiples, and in a range of sizes. If you don't remove every one of them you're going to be living with the ones you didn't get. For example, if there are three of them, and when removed they save 50%, 25%, and 12.5%, then if you do all three you get a speedup of 8x. Pretty good. But, if you miss a single one of them, you don't get anywhere near that.
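
Spelling that arithmetic out with the same numbers:

```csharp
// The "rocks" arithmetic from the answer, using the same made-up fractions.
using System;

class SpeedupSketch
{
    static void Main()
    {
        double[] rocks = { 0.50, 0.25, 0.125 };   // share of time each avoidable activity takes

        double remaining = 1.0;
        foreach (var r in rocks) remaining -= r;
        Console.WriteLine($"Remove all three:   {1.0 / remaining:F1}x speedup");            // 8.0x

        // Miss just the smallest rock and half of that speedup is gone.
        Console.WriteLine($"Miss the 12.5% one: {1.0 / (1.0 - 0.50 - 0.25):F1}x speedup");  // 4.0x
    }
}
```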

Profilers are supposed to be rock-finders, but if they miss one, how are you going to know? If the output of the profiler is impressive-looking, but doesn't seem to suggest much you could actually fix, does that mean there is none? Nope. More on all that.

Mike Dunlavey

The time in milliseconds will vary based on many factors - your development machine may have four processors and 32 GB of RAM, but the user's machine may only have a single core and 1 GB of RAM.

What will be consistent (mostly1) are "the bits that take the longest" - so the percentage helps you to identify the slowest parts of your code, which are the parts you can gain the most time back from by optimising.

1 notwithstanding how a compiler may optimise code based on processor.

Fenton

Percentage is good, but there is also a need for time in milliseconds. If you have to compare two versions of software doing the same task, one taking much longer than the other, the percentages are harder to compare than the absolute time spent in each function. Why not give us the choice of seeing either percentages or absolute times?
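
A small invented example of why that matters: a function's percentage can rise between two builds even while its absolute cost falls:

```csharp
// Invented numbers: comparing the same function across two builds.
// Its percentage rises, yet its absolute cost falls.
using System;

class VersionCompareSketch
{
    static void Main()
    {
        // (total run time in ms, share of that time spent in the function)
        var v1 = (total: 10000.0, share: 0.50);   // 5000 ms in the function
        var v2 = (total:  6000.0, share: 0.60);   // 3600 ms in the function

        Console.WriteLine($"v1: {v1.share:P0} of {v1.total} ms = {v1.total * v1.share} ms");
        Console.WriteLine($"v2: {v2.share:P0} of {v2.total} ms = {v2.total * v2.share} ms");
        // Only the absolute times make it obvious the function actually got faster.
    }
}
```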

I am very surprised this hasn't been brought up already.