
Say I have methods that call pow/log with one variable argument and one constant:

// Example values; the actual constants don't matter for the question.
const double constantBase = 2.0;
const double constant = 2.0;

double LogX(int x){
   return Math.Log(x, constantBase);
}

double PowX(int x){
   return Math.Pow(constant, x);
}

I am not sure what the correct time complexity for these is when they use the Math functions. My conceptual impression is that PowX would be O(n), since it has to multiply the constant x - 1 times, but I know there are other ways to implement power functions, and I couldn't find a clear answer as to what I should assume here. If Math.Pow itself runs in constant time, is PowX then O(1)? I'm similarly unsure of how to approach LogX correctly.
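To make my mental model concrete, the naive repeated-multiplication version of PowX I'm picturing looks something like this (just a sketch of my assumption, not what Math.Pow actually does):

double PowXNaive(int x){
   // Multiply the constant together x - 1 times (for x >= 1),
   // which is why I'd call this O(n) in the exponent.
   double result = constant;
   for (int i = 1; i < x; i++){
      result *= constant;
   }
   return result;
}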

I am using C#, if that matters, but I'm looking for a general understanding of this. If I just had an equation f = constant^x or f = log(x), would the time complexities be any different from computing them via the Math functions?

filaments
  • If your constant is 2 or a power of two, then `2^x` can be computed more efficiently using bitshifts, so it would not have a runtime of `O(n)`. Maybe you can assume that C# implements the math functions in the fastest known way; then you could look up the fastest known algorithms for `log()` and `pow()` and use those. – Maximilian Gerhardt Dec 13 '15 at 01:20
  • Take a peek at the code... – ChiefTwoPencils Dec 13 '15 at 01:21
  • Yours is a good question. [Answers to this question](http://stackoverflow.com/questions/8870442/how-is-math-pow-implemented-in-net-framework) may be helpful. – displayName Dec 13 '15 at 01:22
  • I did see that question, but I don't think I entirely understand the answer. I'm fairly new to programming and am not familiar with many of those terms. Would Math.Log have a similar approach? – filaments Dec 13 '15 at 01:36
  • @filaments: I'm guessing that Math.Log() would have a complexity O(#bits) because log calculation can be done by left shifting bits one by one. – displayName Dec 13 '15 at 01:41
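A rough illustration of the shifting idea from the comment above (IntegerLog2 is a hypothetical helper added here for illustration, not a framework method):

static int IntegerLog2(uint value){
   // Count how many times the value can be halved before reaching 1.
   // The loop runs at most once per bit, so it's O(#bits),
   // which is a fixed bound (hence O(1)) for a fixed-width integer type.
   int log = 0;
   while (value > 1){
      value >>= 1;
      log++;
   }
   return log;
}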

1 Answer


For numbers with a fixed bit width (like float/double/int) you can consider these functions (along with other ones like sin/cos) as O(1): there is a well-defined bound on their running time when they are executed by the floating-point hardware, and even if they are implemented manually, the series used to compute them can be truncated after a fixed number of terms.
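To illustrate the "fixed number of terms" point with my own sketch (real math libraries use range reduction, polynomial approximations, and tables rather than this), a truncated series does a bounded amount of work regardless of the input value:

static double ExpApprox(double x){
   // Truncated Taylor series: e^x ~= sum of x^k / k! for k = 0..TermCount-1.
   // TermCount is fixed, so the work is bounded regardless of x,
   // which is what makes the cost O(1) for a fixed-width double.
   const int TermCount = 20;     // fixed; chosen for illustration only
   double sum = 1.0;
   double term = 1.0;
   for (int k = 1; k < TermCount; k++){
      term *= x / k;             // builds x^k / k! incrementally
      sum += term;
   }
   return sum;
}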

For arbitrary-precision numbers you'd have to use significantly more complicated estimates, which will also be tied directly to the implementation you are using.

E.g. whole-number powers like bigNumber^n (where n is an integer) can be computed with O(log n) multiplications (exponentiation by squaring); the complexity of each multiplication in turn depends on the number of bits in the operands (O(initial_length_in_bits^2) for a single schoolbook multiplication, and I'd guess ^3 across all the multiplications). So the resulting complexity would be something like O(log n * initial_length_in_bits^3).
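A sketch of the O(log n) multiplication count using System.Numerics.BigInteger (note BigInteger.Pow already exists in .NET; this hand-rolled version is only to show the structure):

using System.Numerics;

static BigInteger PowBySquaring(BigInteger baseValue, int n){
   // Square-and-multiply: the loop runs once per bit of n (n >= 0),
   // so only O(log n) BigInteger multiplications are performed;
   // the cost of each multiplication grows with the operands' bit length.
   BigInteger result = BigInteger.One;
   while (n > 0){
      if ((n & 1) == 1)
         result *= baseValue;    // include this bit's factor
      baseValue *= baseValue;    // square for the next bit
      n >>= 1;
   }
   return result;
}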

Alexei Levenkov