
Today, my colleague berated me for using pow(x, 2.0). He (a reasonably high-rep user on this site) insisted I write x * x instead, claiming it would be both faster and clearer.

But why? Wouldn't pow know to optimise anyway? Surely my approach is clearer?

P45 Imminent

3 Answers


May I suggest one thing? With a good compiler and a proper optimization level, pow(x, 2.0) may be as fast as x * x. It will not be faster, though; it may be slower. For that reason alone I would still write x * x.

Grzegorz

If clarity is the utmost criterion (and it probably should be), I'd argue that squared(x) is even more readable than either pow(x, 2.0) or x*x. It's a trivial function to write, and you can use whichever implementation benchmarks fastest on your compiler and machine.

inline double squared(double x) { return x*x; } // or return pow(x, 2.0);
Mark Ransom

They probably mean that calling pow(x, 2.0) implies stack operations to save arguments and registers, plus program-counter jumps, so it is better and simpler to write x*x when you need x². But I think that is a very extreme optimisation.

gior91
  • It's not as extreme an optimization as you might think. `pow` is already a special case that can often be replaced by an [intrinsic function](http://en.wikipedia.org/wiki/Intrinsic_function) that incurs no calling overhead. If the compiler does that, it's a very simple check to see if the second parameter is a constant 2.0. – Mark Ransom Mar 13 '14 at 22:27
  • But we don't know how the compiler behaves in these cases. Do you? – gior91 Mar 14 '14 at 13:01
  • 1
    I've never tested this particular case to see if it's optimized, because I've never needed to. I was just pointing out that it's not impossible. I've seen even more drastic optimizations before. – Mark Ransom Mar 14 '14 at 17:00