9

Yesterday I saw a question asking why Math.pow(int,int) is so slow, but the question was poorly worded and showed no research effort, so it was quickly closed.

I did a little test of my own and found that the Math.pow method did indeed run extremely slowly compared to my own naive implementation (which isn't even a particularly efficient implementation) when given integer arguments. Below is the code I ran to test this:

class PowerTest {

    public static double myPow(int base, int exponent) {
        if(base == 0) return 0;
        if(exponent == 0) return 1;
        int absExponent = (exponent < 0)? exponent * -1 : exponent;
        double result = base;
        for(int i = 1; i < absExponent; i++) {
            result *= base;
        }
        if(exponent < 1) result = 1 / result;
        return result;
    }

    public static void main(String args[]) {
        long startTime, endTime;

        startTime = System.nanoTime();
        for(int i = 0; i < 5000000; i++) {
            Math.pow(2,2);
        }
        endTime = System.nanoTime();
        System.out.printf("Math.pow took %d milliseconds.\n", (endTime - startTime) / 1000000);

        startTime = System.nanoTime();
        for(int i = 0; i < 5000000; i++) {
            myPow(2,2);
        }
        endTime = System.nanoTime();
        System.out.printf("myPow took %d milliseconds.\n", (endTime - startTime) / 1000000);
    }

}

On my computer (Linux on an Intel x86_64 CPU), the output almost always reported that Math.pow took 10 ms while myPow took 2 ms. This occasionally fluctuated by a millisecond here or there, but on average Math.pow ran about 5x slower.

I did some research and, according to grepcode, Math.pow only offers a method with the type signature (double, double), and it defers to the StrictMath.pow method, which is a native method call.

The fact that the Math library only offers a pow function that deals with doubles suggests a possible answer to this question. Obviously, a power algorithm that must handle a base or exponent of type double is going to take longer to execute than my algorithm, which deals only with integers. In the end, though, it comes down to architecture-dependent native code (which almost always runs faster than JVM bytecode, probably C or assembly in my case). It seems that at this level, an optimization could check the argument types and run a simpler algorithm when possible.

Given this information, why does the native Math.pow method consistently run much slower than my un-optimized and naive myPow method when given integer arguments?

Woodrow Barlow
  • 8,477
  • 3
  • 48
  • 86
  • A comment on this line: `int absExponent = (exponent < 0)? exponent * -1 : exponent;` You don't need the `*`: `int absExponent = (exponent < 0)? -exponent : exponent;` – Jose Luis Jan 06 '16 at 16:42
  • that's not even an optimal (integer) power algorithm - an efficient one repeatedly squares: `if ((absExponent & 1) == 0) { result *= result; absExponent >>= 1; }` (note the parentheses; `==` binds more tightly than `&` in Java) - i.e. it runs in `O(log n)` instead of `O(n)` time. – Alnitak Jan 06 '16 at 16:52
  • Since your implementation uses a loop on the exponent, computing a square is a very biased benchmark. Why don't you try both of them on 2⁸ :) (Of course, you could do better with a less naïve algorithm, But even then, 2¹⁵ might show you a different result.) – rici Jan 06 '16 at 21:14

4 Answers

12

As others have said, you cannot simply ignore the use of double, as floating-point arithmetic will almost certainly be slower. However, this is not the only reason: even if you change your implementation to use double, it is still faster than Math.pow.

This is because of two things. The first is that 2^2 (exponentiation, not xor) is a very quick calculation to perform, so your algorithm is fine for that case; try using two values from Random#nextInt (or nextDouble) and you'll see that Math#pow is actually much quicker.

The other reason is that calling native methods has overhead, which is actually meaningful here because 2^2 is so quick to calculate and you are calling Math#pow so many times. See "What makes JNI calls slow?" for more on this.
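To illustrate the point about biased inputs, here is a sketch of a fairer comparison that feeds both methods the same pre-generated random arguments and accumulates the results so the JIT cannot discard the loop bodies. The class name and setup are illustrative assumptions; myPow is reproduced from the question.

```java
import java.util.Random;

class FairPowerTest {

    // The question's naive implementation, reproduced unchanged.
    static double myPow(int base, int exponent) {
        if (base == 0) return 0;
        if (exponent == 0) return 1;
        int absExponent = (exponent < 0) ? exponent * -1 : exponent;
        double result = base;
        for (int i = 1; i < absExponent; i++) {
            result *= base;
        }
        if (exponent < 1) result = 1 / result;
        return result;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int n = 1_000_000;

        // Pre-generate arguments so generation cost is not part of the timing.
        int[] bases = new int[n], exps = new int[n];
        for (int i = 0; i < n; i++) {
            bases[i] = rng.nextInt(10) + 2;   // bases 2..11
            exps[i] = rng.nextInt(100);       // exponents 0..99
        }

        double sink = 0;  // accumulate results to keep the loops live
        long start;

        start = System.nanoTime();
        for (int i = 0; i < n; i++) sink += Math.pow(bases[i], exps[i]);
        System.out.printf("Math.pow: %d ms%n", (System.nanoTime() - start) / 1_000_000);

        start = System.nanoTime();
        for (int i = 0; i < n; i++) sink += myPow(bases[i], exps[i]);
        System.out.printf("myPow:    %d ms%n", (System.nanoTime() - start) / 1_000_000);

        if (Double.isNaN(sink)) System.out.println(sink); // prevent dead-code elimination
    }
}
```

With large, varied exponents the O(n) multiplication loop in myPow does far more work per call, which is exactly why the original 2^2 benchmark was misleading. (For serious measurements a harness such as JMH would be more trustworthy than hand-rolled nanoTime loops.)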

  • 3
    aha! i replaced all of my arguments with a call to `nextInt(1000)`, and now the `Math.pow` method runs much faster than my implementation. so it turns out to have been a bad test case. – Woodrow Barlow Jan 06 '16 at 16:57
1

There is no pow(int,int) function. You are comparing apples to oranges with your simplifying assumption that floating point numbers can be ignored.

erickson
  • 265,237
  • 58
  • 395
  • 493
  • but it gets passed down to native code. surely the optimization would be made at the native level? – Woodrow Barlow Jan 06 '16 at 16:44
  • @WoodrowBarlow The values passed to native code are `double`. Are you suggesting that if the native code detects no fractional part, that an alternate path should be used? A path where the arguments are cast to an integral type and operated on with integral operators? – erickson Jan 06 '16 at 16:47
  • well, sure. is there a reason why that would be a bad idea? and if that is a bad idea, why not overload the pow method with an integer-type variant? to me, it seems like it would be a high-value optimization, but i understand that there's probably something i'm missing. – Woodrow Barlow Jan 06 '16 at 16:49
  • That checking and branching will take extra time every time `pow()` is called, and the integer argument case is atypical. Test performance when raising to a larger power and you'll see that the test isn't as simple as "is there a fractional part?" So, you'd be slowing down the typical use case for little benefit. Overloading the `pow()` function would avoid this problem, but it might not be worth cluttering the API with such a specialized function when it's so easy to implement yourself. – erickson Jan 06 '16 at 16:54
  • But in the case of a small constant integer for the second parameter, the compiler could do the optimization. – NetMage Nov 28 '22 at 22:51
0

Math.pow is slow because it handles exponentiation in the generic sense, supporting fractional powers. The extra work it must go through to handle that general case is what takes more time.

Simply multiplying numbers together in the JVM is often faster, since it avoids the overhead of the native call that Math.pow makes.

Edit: It may also be worth noting that Math functions use doubles, which can take longer than using ints.

M Lizz
  • 141
  • 5
0

Math.pow(x, y) is probably implemented as exp(y log x). This allows for fractional exponents and is remarkably fast.
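As an illustration of that identity (not a claim about how Math.pow is actually coded), a sketch like the following shows that exp(y log x) agrees with Math.pow to within floating-point error for positive bases. The class and method names are hypothetical:

```java
class ExpLogPow {

    // x^y = e^(y * ln x), valid for x > 0.
    // This is only a demonstration of the mathematical identity.
    static double expLogPow(double x, double y) {
        return Math.exp(y * Math.log(x));
    }

    public static void main(String[] args) {
        System.out.println(expLogPow(3.0, 2.0));  // close to Math.pow(3.0, 2.0) == 9.0
        // The identity also covers fractional exponents, e.g. a square root:
        System.out.println(expLogPow(2.0, 0.5));  // close to Math.sqrt(2)
    }
}
```

Note the results are only *close* to exact: each of log, multiply, and exp introduces a rounding error, which is one reason a real libm pow implementation is more elaborate than this one-liner.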

But you'll be able to beat this performance by writing your own version if you only require small positive integral arguments.

Arguably Java could make that check for you, but there would be a point, for large integers, where the built-in version would be faster. It would also have to define an appropriate integral return type, and the risk of overflowing that type is obvious. Defining the behaviour around that crossover would be tricky.

By the way, your integral type version could be faster. Do some research on exponentiation by squaring.
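For reference, a minimal sketch of exponentiation by squaring, which needs only O(log n) multiplications instead of the O(n) loop in the question (the class and method names are illustrative):

```java
class FastPow {

    // Exponentiation by squaring: process the exponent bit by bit,
    // squaring the running factor once per bit.
    static double fastPow(double base, int exponent) {
        long e = exponent;            // widen so negating Integer.MIN_VALUE is safe
        boolean negative = e < 0;
        if (negative) e = -e;
        double result = 1.0;
        double factor = base;
        while (e > 0) {
            if ((e & 1) == 1) result *= factor; // low bit set: multiply this power in
            factor *= factor;                    // square for the next bit
            e >>= 1;
        }
        return negative ? 1.0 / result : result;
    }

    public static void main(String[] args) {
        System.out.println(fastPow(2, 15));  // 32768.0
        System.out.println(fastPow(2, -2));  // 0.25
    }
}
```

For 2^15 this performs 4 squarings and a handful of multiplies rather than 14 sequential multiplications, which is the difference rici's 2^15 example would expose.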

Bathsheba
  • 231,907
  • 34
  • 361
  • 483