13

I would like to ask about the time complexity of the following code. Is it O(n)? (Is the time complexity of Math.pow() O(1)?) In general, does Math.pow(a, b) have time complexity O(b) or O(1)? Thanks in advance.

public void foo(int[] ar) {
    int n = ar.length;
    int sum = 0;
    for (int i = 0; i < n; ++i) {
        sum += Math.pow(10, ar[i]);
    }
}
OmG
seriously divergent
  • if you only want integer powers, writing your own power function is better. As there are only a few powers of 10 that fit in an int, simply use a lookup table and you're good with O(1) – phuclv Sep 06 '15 at 04:54
  • @LưuVĩnhPhúc, but doing it iteratively means you do b operations for a^b, doesn't it? In the original question, if the array length is n, then the array values range between 0 and n, and the max is at least n-1. Thus the above code would have complexity O(n^2), wouldn't it? – seriously divergent Sep 06 '15 at 19:29
  • Where did I say that you do it iteratively for powers? What I said is a [lookup table](https://en.wikipedia.org/wiki/Lookup_table) like `int pow10[] = {1, 10, 100, 1000,... 1000000000};` and to get `Math.pow(10, i)` simply use `pow10[i]` (a short sketch of this follows after these comments). Even if you do the multiplication manually, you only need O(log n) with [exponentiation by squaring](https://en.wikipedia.org/wiki/Exponentiation_by_squaring) ([The most efficient way to implement an integer based power function pow(int, int)](http://stackoverflow.com/q/101439/995714)), not O(n) like doing it iteratively – phuclv Sep 07 '15 at 05:08
  • Oh I see what you meant, thank you! – seriously divergent Sep 08 '15 at 22:06
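
A minimal sketch of the lookup-table idea from the comments above might look like the following (the class and method names are illustrative, not from the thread, and it assumes the array values stay in 0..9). Only the powers of 10 from 10^0 to 10^9 fit in an int, so the table is tiny, each lookup is O(1), and a variant of the question's loop stays O(n) overall:

    public class Pow10 {
        // All powers of 10 that fit in an int.
        private static final int[] POW10 = {
            1, 10, 100, 1_000, 10_000, 100_000,
            1_000_000, 10_000_000, 100_000_000, 1_000_000_000
        };

        // O(1) lookup, valid for 0 <= e <= 9.
        static int pow10(int e) {
            return POW10[e];
        }

        // A variant of the loop from the question, using the table; O(n) overall.
        public static long foo(int[] ar) {
            long sum = 0;
            for (int value : ar) {
                sum += pow10(value);
            }
            return sum;
        }
    }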

2 Answers

12

@Blindy talks about possible approaches that Java could take in implementing pow.

First of all, the general case cannot be repeated multiplication: that won't work when the exponent is not an integer. (The signature for pow is Math.pow(double, double)!)

In the OpenJDK 8 codebase, the native code implementation for pow can work in two ways:

  • The first implementation in e_pow.c uses a power series. The approach is described in the C comments as follows:

    * Method:  Let x = 2^n * (1+f)
    *      1. Compute and return log2(x) in two pieces:
    *              log2(x) = w1 + w2,
    *         where w1 has 53-24 = 29 bit trailing zeros.
    *      2. Perform y*log2(x) = n+y' by simulating multi-precision
    *         arithmetic, where |y'|<=0.5.
    *      3. Return x**y = 2**n*exp(y'*log2)
    
  • The second implementation in w_pow.c is a wrapper for the pow function provided by the Standard C library. The wrapper deals with edge cases.

Now it is possible that the Standard C library uses CPU specific math instructions. If it did, and the JDK build (or runtime) selected¹ the second implementation, then Java would use those instructions too.

But either way, I can see no trace of any special case code that uses repeated multiplication. You can safely assume that it is O(1).


¹ I haven't delved into how or when the selection is / can be made.
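
To see why the method quoted above runs in a constant number of floating-point operations regardless of the exponent, here is a rough conceptual sketch of the same three steps in plain Java. It is emphatically not the real implementation (e_pow.c keeps log2(x) in two pieces, simulates multi-precision arithmetic for accuracy, and handles many special cases), it assumes x > 0 and a result in double range, and the method name is made up for illustration:

    static double conceptualPow(double x, double y) {
        // Step 1 (simplified): compute log2(x) in one piece instead of two.
        double log2x = Math.log(x) / Math.log(2.0);

        // Step 2: split y * log2(x) into an integer n and a remainder y' with |y'| <= 0.5.
        double t = y * log2x;
        double n = Math.rint(t);
        double yPrime = t - n;

        // Step 3: x**y = 2**n * exp(y' * ln 2) = 2**(n + y').
        return Math.scalb(Math.exp(yPrime * Math.log(2.0)), (int) n);
    }

However much the real code refines these steps, the operation count does not depend on the magnitude of y, which is why the O(1) conclusion holds.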

Stephen C
  • It is never necessary to use repeated multiplication. There is an *O(log N)* algorithm for integer exponentiation (sketched after these comments). – user207421 Sep 06 '15 at 01:25
  • @EJP - 1) What is "necessary" and what actually happens are not the same thing. 2) For `int` and `long`, you can probably tune the algorithm with special cases so that `O(N)` is faster than `O(logN)` in the range of `N` and `X` where you don't get integer overflow. – Stephen C Sep 06 '15 at 02:13
  • @EJP E.g. treat X == 0, 1 and 2 and X < 0 as special cases. If you deal with the X==2 case using shifts, log3(2^32) is < 21; i.e. X >= 3 requires 21 or less rounds of repeated multiplication. – Stephen C Sep 06 '15 at 02:25
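
For completeness, the O(log N) integer exponentiation mentioned in the first comment is exponentiation by squaring. A minimal sketch for non-negative integer exponents, ignoring overflow (the method name is illustrative):

    static long ipow(long base, int exp) {
        if (exp < 0) {
            throw new IllegalArgumentException("negative exponent");
        }
        long result = 1;
        while (exp > 0) {
            if ((exp & 1) == 1) {   // odd exponent: fold one factor into the result
                result *= base;
            }
            base *= base;           // square the base
            exp >>= 1;              // halve the exponent
        }
        return result;
    }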
6

You can consider Math.pow to be O(1).

There are a few possible implementations, ranging from a CPU assembler instruction (Java doesn't use this) to a stable software implementation based on, for example, a Taylor series expansion over a few terms (though not exactly a Taylor implementation; there are more specific algorithms).

It most definitely won't repeatedly multiply if that's what you're worried about.
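
If you want to convince yourself empirically, a rough (and deliberately unscientific) timing sketch along these lines should show Math.pow staying roughly flat as the exponent grows, while naive repeated multiplication grows linearly with it. All names and constants here are illustrative:

    public class PowTiming {
        // Naive repeated multiplication: O(b) for a non-negative integer exponent b.
        static double naivePow(double a, int b) {
            double result = 1.0;
            for (int i = 0; i < b; i++) {
                result *= a;
            }
            return result;
        }

        public static void main(String[] args) {
            double sink = 0;   // accumulate results so the JIT cannot discard the work
            for (int b = 1_000; b <= 100_000; b *= 10) {
                long t0 = System.nanoTime();
                for (int i = 0; i < 5_000; i++) {
                    sink += Math.pow(1.0000001, b);
                }
                long t1 = System.nanoTime();
                for (int i = 0; i < 5_000; i++) {
                    sink += naivePow(1.0000001, b);
                }
                long t2 = System.nanoTime();
                System.out.printf("b=%d  Math.pow: %d us  naive: %d us%n",
                        b, (t1 - t0) / 1_000, (t2 - t1) / 1_000);
            }
            System.out.println(sink);   // print the sink so the work is observably used
        }
    }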

Blindy
  • 1
    Why wouldn't Java use the assembler instruction? The implementation is native. – chrylis -cautiouslyoptimistic- Sep 06 '15 at 00:18
  • Nobody does, including C. The hardware implementation isn't fully standards compliant (with respect to NaN handling IIRC). You have to explicitly force the C compiler to use hardware instructions (`-ffast-math` for gcc for example). – Blindy Sep 06 '15 at 02:34
  • @chrylis No computer architecture I know has an instruction for calculating powers. The simplest implementation would be `exp(b*log(a))` – phuclv Sep 06 '15 at 04:48
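
A tiny sketch of that identity, a^b = e^(b * ln a) for a > 0, which takes a fixed number of operations no matter how large b is (purely illustrative, not what any particular libm actually does):

    public class PowViaExpLog {
        public static void main(String[] args) {
            double a = 10.0, b = 5.0;
            System.out.println(Math.exp(b * Math.log(a)));  // close to 100000.0, may differ in the last bits
            System.out.println(Math.pow(a, b));             // compare with Math.pow for the same inputs
        }
    }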