
I've encountered suboptimal code in several open-source projects, where programmers did not think about which functions they were using.

There can be up to a 10× performance difference between the two approaches, because Math.Pow uses the Exp and Ln functions internally, as explained in this answer.
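
That approach rests on the identity x^y = e^(y·ln x). A naive sketch for illustration only (not the actual implementation, and valid only for x > 0):

    // Naive pow via the exp/ln identity -- illustration only;
    // this is NOT how Math.Pow is actually implemented.
    static double NaivePow(double x, double y)
    {
        return Math.Exp(y * Math.Log(x));  // x^y == e^(y * ln x), x > 0
    }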

Plain multiplication is faster than a general power function in most cases (with small exponents), but the best, of course, is the exponentiation-by-squaring algorithm.
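
For reference, here is a minimal sketch of exponentiation by squaring for non-negative integer exponents (the method name is just illustrative):

    // Exponentiation by squaring: O(log n) multiplications instead of O(n).
    static double PowBySquaring(double x, int n)
    {
        double result = 1.0;
        while (n > 0)
        {
            if ((n & 1) == 1)  // lowest exponent bit set: multiply into result
                result *= x;
            x *= x;            // square the base
            n >>= 1;           // move to the next exponent bit
        }
        return result;
    }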

Thus, I think the compiler or JITter should perform such optimizations for powers and similar functions. Why hasn't this been introduced? Am I right?

Ivan Kochurkin
  • Most likely because this is a rare case that can be worked around quite easily if you need it. – svick Sep 22 '12 at 10:09
  • The compiler probably doesn't optimize it because to the compiler, `Math.Pow` is just a function call to a method in another assembly that could be replaced at any time. – O. R. Mapper Sep 22 '12 at 10:12
  • The compiler or jitter certainly could optimize this, but since the workaround is trivial, other optimizations it's lacking are much more important. – CodesInChaos Sep 22 '12 at 10:18

2 Answers


Read the answer you've referenced again; it clearly states that the CRT uses a pow() function that Microsoft bought from Intel. The Math.Log and Math.Exp example you see there is one the author of that answer found in a programming book.

The "problem" with general exponentiation methods is that that they are build to produce the most accurate results for all cases. This often results in sub-optimal performance for certain cases. To increase the preformance of these certain cases, conditional logic must be added which results in performance loss for all cases. Because squaring or cubing a value is that simple to write without the Math.Pow method, there is no need to optimize these cases and taking the extra loss for all other cases.

zeebonk

I would say that would be a bad idea, because the two methods do NOT return the same results every time.

Here is a small test script:

    // requires: using System; using System.Linq;
    var r = new Random();

    // Test whether Math.Pow(d, 2.0) and d * d produce bit-identical
    // results for 1000 random doubles. (Random is not thread-safe,
    // so the values are generated on a single thread.)
    var allIdentical = Enumerable.Range(0, 1000).All(p =>
    {
        var d = r.NextDouble();

        var pow = Math.Pow(d, 2.0);
        var sqr = d * d;

        var identical = pow == sqr;
        if (!identical)
            Console.WriteLine(d);  // report any value where they diverge

        return identical;
    });

The two implementations have different accuracies. If a calculation is to be reliable, it should be reproducible. If, for example, the squaring optimization were applied only in release builds, the debug and release versions would return different results. That can be quite a mess when hunting down errors...

user287107
  • When one writes Math.Pow(x, 2), it practically always means x * x. So Math.Pow(x, 2) could be optimized to x * x in both debug and release modes, giving identical results. – Ivan Kochurkin Sep 22 '12 at 11:44
  • No, because if you implemented this check in the Math.Pow function, you would lose performance in the normal cases. If you implemented it in the JIT, you would get a difference between the debug and release versions, because JIT optimizations are disabled in debug builds. – user287107 Sep 22 '12 at 11:46