
If I make some changes to my code in the hopes that it will improve performance, what's the best way to tell? Just running my code again, I often find that yes, it improved speed from 20.0 seconds to 19.5 seconds, but then I run it again and, whoops, 20.1 seconds. Is there something like a "total number of operations" metric that is completely deterministic between runs that I can use to measure performance? I am coding C++ in Visual Studio.

Oscar
  • This talk gives a good insight into the problems of performance measurement and how one can deal with them: ["Performance Matters" by Emery Berger](https://www.youtube.com/watch?v=r-TLSBdHe1A) – t.niese Jun 02 '20 at 19:00
  • While this is indeed an interesting question, it can hardly be answered in a good/complete way within the scope of a Q&A platform. There are so many things that have to be considered that have an impact on performance, and also problems around the different ways in which performance can be measured. – t.niese Jun 02 '20 at 19:06
  • Maybe I can make the question a bit easier to answer by instead asking, "How can I determine whether a change made to my code was 'good' or not?" Since that's essentially what I want to know. – Oscar Jun 02 '20 at 19:13
  • 1
    That's sadly nor easily possible. There are certain things that are more obviously an optimization, but for many parts is really hard to tell. Every change you do has an impact on the generated machine code and on the memory layout. So if you do a change in the code that improves performance, it does not necessarily mean that it performs better as the previous code when you do further changes to your code, it just might happen that you triggered a side condition. So you basically need to simulate those effects. I suggest you watch that talk. – t.niese Jun 02 '20 at 19:26
  • Will do, thank you! – Oscar Jun 02 '20 at 19:27
  • You may be able to get some measurements by determining the clock cycles of the assembly instructions that were emitted. However, on some processors, the clock cycles are not consistent because of things like data cache, instruction cache, parallel adding, etc. – Thomas Matthews Jun 02 '20 at 21:06
  • @Oscar For a random example of why it is between hard and hopeless to do it "on paper" see [Adding a redundant assignment speeds up code when compiled without optimization](https://stackoverflow.com/questions/49189685/adding-a-redundant-assignment-speeds-up-code-when-compiled-without-optimization). – dxiv Jun 02 '20 at 22:25
  • If it's okay for you to measure the difference between two versions of code with an inaccuracy in the range of 1-2 seconds, then there are ways to do that; memory alignment, CPU caches, and clock speed wouldn't be an issue. Actually, it's more a statistics-related question than a software-related one, as you need to look at how to calculate your results and interpret them rather than how to implement the measurements. Try looking at the notion of the `95th percentile` in relation to performance testing. – cassandrad Jun 03 '20 at 10:42

0 Answers