In my Java program I have to frequently compute `a^b`, where `a ≥ 0` and `0 ≤ b < 1`. I am not satisfied with the speed of `Math.pow(a, b)`. Is there any algorithm that could be potentially faster than `Math.pow` (at some cost of accuracy)?

-
Maybe shorter (truncated) parts of the Taylor series (http://stackoverflow.com/questions/4453421/implement-generic-power-function-without-using-math-pow-in-java)? Depending on the application, caching results may be useful. – J Fabian Meier Mar 08 '17 at 14:41
-
You might get a faster result with `exp(b*log(a))` or its base-2 variant if accessible. This will in some cases give obvious floating-point errors, but avoids the overhead that the more professional versions of `pow` include to reduce these errors. – Lutz Lehmann Mar 08 '17 at 14:50
-
Do you have more information about the distribution of these a and b? Are you allowed to precompute tables? Isn't there a workaround to avoid the need to compute these powers? ... More context would be useful. – Mar 08 '17 at 15:22
-
You could try the following approach: y = a^b = (m * 2^e)^b = m^b * 2^(e*b) = m^b * 2^(e*b - E) * 2^E, where E = nearestint(e*b) and |e*b - E| < 1/2. Both m^b and 2^(e*b - E) can be calculated by polynomials, and you can choose the degree of the polynomial based on the required accuracy. I use Sollya to find polynomials [link](http://sollya.gforge.inria.fr/). – kanna Mar 08 '17 at 21:45
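Sketched in Java, the `exp(b*log(a))` idea from the comments above might look like this. This is a minimal sketch: the method name `fastPow` is illustrative, any speedup over `Math.pow` on a given JVM is an assumption that would need to be confirmed with a benchmark (e.g. JMH), and it gives up accuracy and the special-case handling that `Math.pow` performs.

```java
static double fastPow(double a, double b) {
    if (a == 0.0) {
        return (b == 0.0) ? 1.0 : 0.0; // crude handling of the a == 0 edge case
    }
    // For a > 0 and 0 <= b < 1: a^b = exp(b * ln(a))
    return Math.exp(b * Math.log(a));
}
```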
2 Answers
Do you need the precise result, or just an approximation?
If a precise result: That's tough.
If you only want an approximation: Try the nth root algorithm (a fixed-point iteration) as described on Wikipedia.
You can then use as many clock cycles for your nth-root approximation as you see fit for your application.
But I guess `Math.pow` is already doing that for you. If you want to do it faster, you have to invest some serious work. Even in C (which I know way better than Java) I think writing a really freakin' fast variant of that fixed-point iteration is not the easiest task (if I were doing nth-root computations all the time, and they were the absolute bottleneck, I would resort to multithreading and, if possible, doing a lot of work with SIMD/AVX instructions - or even OpenCL/CUDA).
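For exponents that can be written as a ratio p/q, that nth-root idea could be sketched roughly as below, using a plain Newton iteration for f(x) = x^n - A (the "nth root algorithm" on Wikipedia). The names, starting guess, and stopping rule are illustrative assumptions, not a tuned implementation.

```java
final class NthRoot {

    /** x^n by repeated multiplication, for small integer n >= 0. */
    private static double intPow(double x, int n) {
        double r = 1.0;
        for (int i = 0; i < n; i++) {
            r *= x;
        }
        return r;
    }

    /** Approximates A^(1/n) for A >= 0 and integer n >= 1 via Newton's method. */
    static double nthRoot(double A, int n) {
        if (A == 0.0 || n == 1) {
            return A;
        }
        // Start above the root so the iteration descends monotonically.
        double x = (A >= 1.0) ? A : 1.0;
        for (int i = 0; i < 100; i++) {
            double next = ((n - 1) * x + A / intPow(x, n - 1)) / n;
            if (Math.abs(next - x) <= 1e-12 * x) {
                return next;
            }
            x = next;
        }
        return x;
    }

    public static void main(String[] args) {
        // Example: a^0.4 = a^(2/5), computed as the 5th root of a^2.
        double a = 3.7;
        System.out.println(nthRoot(a * a, 5) + " vs " + Math.pow(a, 0.4));
    }
}
```

Whether an iteration like this beats `Math.pow` depends heavily on the required accuracy and the quality of the starting guess, so it is worth benchmarking before committing to it.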

-
I also like the question linked by @uzr. The question itself & the accepted answer concern integer exponentiation, which is actually not that useful in your case. **But the point by user "sbi" about "Breakthrough speedups" is very good** and quite applicable in any program optimization case - and might be worth considering for you. – ArchimedesMP Mar 08 '17 at 15:11
I believe it's hard to find better-optimized math libraries than the standard math library in Java.
Maybe you should look at other optimizations that can be done in your algorithm, like caching frequent values so they can be re-used instead of calculated again, thereby reducing the number of calculations made.
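A minimal sketch of that caching idea, assuming the same `(a, b)` pairs really do recur in the workload (the class and method names are illustrative):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class PowCache {
    private final Map<List<Double>, Double> cache = new HashMap<>();

    double pow(double a, double b) {
        // The (a, b) pair is the cache key; computeIfAbsent only calls
        // Math.pow the first time a given pair is seen.
        return cache.computeIfAbsent(Arrays.asList(a, b), k -> Math.pow(a, b));
    }
}
```

Because of the boxing and hashing overhead, this only pays off when the hit rate is high, so it should be measured against the plain `Math.pow` call.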
If finding the n-th root very fast is mission-critical, maybe you should consider another language that has faster execution times than Java or can perform the power operation more efficiently.
This might be interesting even though the experiment was conducted in C:
What is more efficient? Using pow to square or just multiply it with itself?