
The usual Fibonacci recurrence is Fib(N) = Fib(N-1) + Fib(N-2), which, implemented naively, gives a time complexity of O(2^N). To reduce the time complexity we have the closed-form (Binet's) formula below:

Fib(N) = [Phi^N - phi^N] / Sqrt[5]

where Phi = (1 + Sqrt[5]) / 2 and phi = (1 - Sqrt[5]) / 2 = 1 - Phi (note that phi is negative, approximately -0.618; Phi - 1 is its absolute value).
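
For clarity, the same closed form (Binet's formula) in LaTeX notation, my rendering of the plain-text formula above, with the conjugate root written explicitly:

```latex
F(N) = \frac{\Phi^{N} - \psi^{N}}{\sqrt{5}},
\qquad
\Phi = \frac{1 + \sqrt{5}}{2} \approx 1.618,
\qquad
\psi = \frac{1 - \sqrt{5}}{2} = 1 - \Phi \approx -0.618.
```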

My Java code is below:

public class Main {
    public static void main(String[] args) {
        System.out.println(fib(50));
    }

    static long fib(int N) {
        final double sqrtFive = Math.sqrt(5);
        double Phi = (sqrtFive + 1) / 2;
        double phi = Phi - 1;      // note: +0.618..., the magnitude of (1 - Sqrt[5])/2
        double fibN = (Math.pow(Phi, N) - Math.pow(phi, N)) / sqrtFive;
        return (long) fibN;        // cast truncates toward zero
    }
}

What is the time complexity of my code?

O(1)? Because modern computers are super fast at computation, so Math.pow(base, power) will be almost constant for any power.

O(log N + log N), which means O(log N)? Because I'm computing Math.pow(Phi, N) - Math.pow(phi, N), and the Math.pow() function takes O(log N) time.

I'm confused; please help me.
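
As an editorial aside (not part of the original question): the double-based formula is only an approximation, so it is worth checking it against an exact iterative computation. A minimal sketch with made-up names `fibIter` and `fibBinet`, using the negative root and Math.round instead of the truncating cast above:

```java
public class FibCheck {
    // Exact Fibonacci by simple iteration: O(N) additions, no floating point.
    static long fibIter(int n) {
        long a = 0, b = 1; // F(0), F(1)
        for (int i = 0; i < n; i++) {
            long next = a + b;
            a = b;
            b = next;
        }
        return a; // F(n)
    }

    // Binet's formula in double precision, with the negative root psi
    // and rounding to the nearest integer instead of truncation.
    static long fibBinet(int n) {
        double sqrt5 = Math.sqrt(5);
        double phi = (1 + sqrt5) / 2; // ~ 1.618
        double psi = (1 - sqrt5) / 2; // ~ -0.618 (note: negative)
        return Math.round((Math.pow(phi, n) - Math.pow(psi, n)) / sqrt5);
    }

    public static void main(String[] args) {
        // Report the first n where the double-based formula diverges
        // from the exact value (the exact value itself fits in a long up to n = 92).
        for (int n = 0; n <= 92; n++) {
            if (fibIter(n) != fibBinet(n)) {
                System.out.println("first mismatch at n = " + n);
                return;
            }
        }
        System.out.println("no mismatch up to n = 92");
    }
}
```

Running this makes the precision limit discussed in the comments concrete: the formula agrees with the exact values for small and moderate N and then diverges once `double` can no longer represent the result exactly.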

asked by CodingDevil; edited by Olivier
  • It is *O(1)* because there is nothing in the formula that happens more than once. Nothing to do with processor speed. – user207421 Apr 17 '22 at 07:19
  • Hey @user207421, but the Math.pow(A, N) function multiplies A up to N times to find the power, so isn't that counted in the time complexity? See `double fibN=(Math.pow(Phi,N)-Math.pow(phi,N))/sqrtFive;` – CodingDevil Apr 17 '22 at 07:21
  • @CodingDevil: no, `pow(A, N) == exp(N * log(A))`, where both `exp` and `log` are built-in FPU commands – Dmitry Bychenko Apr 17 '22 at 07:25
  • Exponentiation by squaring is logarithmic, so yeah, definitely not constant. The real problem is that your Fibonacci function will probably already break for Ns around 30 if I had to guess, thanks to floating-point inaccuracies. – Voo Apr 17 '22 at 07:25
  • `Math.pow()` only loops N times if it's implemented very badly in software, which it almost certainly isn't. – user207421 Apr 17 '22 at 07:25
  • @user207421 Generally, when analyzing algorithms, one is talking about the mathematical algorithm. Otherwise, if you're considering hardware limitations, literally every algorithm is solvable in constant time (a very, very large constant time, but still constant). – Voo Apr 17 '22 at 07:27
  • Expect rounding errors for large N, though. – user3840170 Apr 17 '22 at 07:35
  • @Voo That is simply untrue. Most equations depend on variables, so you cannot know the answer in advance, so you cannot compute them in constant time. The Fibonacci recurrence without Binet's formula, as here, is a case in point: you don't know N in advance. Unless you are also positing infinite lookup tables? – user207421 Apr 17 '22 at 07:36
  • @Dmitry By that argument the traveling salesman problem is constant as well: "your hardware can only represent K nodes; for K nodes we have at most K^K operations, which is constant". Your built-in hardware operation can only handle pretty small numbers; what do you do when the numbers are bigger than that limitation? That's the relevant part (and you definitely can't compute the power of two arbitrarily large numbers in constant time with respect to their size). – Voo Apr 17 '22 at 07:40
  • @user207421 Hardware isn't infinite, which means I only need a finite lookup table, but that's also not necessary - if you limit the size of your input (as you have to do for your hardware), every function is constant in Big O notation, by definition. For any f(n) with n < K, I can find a constant that's always larger than f for any valid input. – Voo Apr 17 '22 at 07:43
  • @Voo We are computing time complexity on the basis of input size; here the pow() function does computation based on its input, so the time complexity can't be constant, I think. – CodingDevil Apr 17 '22 at 07:49
  • @DmitryBychenko May I know the complexity of the 'exp' and 'log' built-in commands? PS: Your comment was, "pow(A, N) == exp(N * log(A)) where both exp and log are built-in FPU commands". Thanks – CodingDevil Apr 17 '22 at 07:53
  • @CodingDevil: In the case of the **x87** FPU, you can use `F2XM1` for `2**x - 1` and `FYL2XP1` to get `y*log2(x + 1)`; both commands have `O(1)` time complexity, and you have to add and subtract `1`, which is also `O(1)`. – Dmitry Bychenko Apr 17 '22 at 08:18
  • @dmitry If you're saying the algorithm is O(1), that means you need approximately the same time to compute the power of two 100-bit numbers as for two one-million-bit numbers. I'm doubtful that you can manage that (what with it being provably impossible). – Voo Apr 17 '22 at 08:23
  • @Voo: you are quite right for *arbitrary-precision* numbers (`100` or maybe a million bits long). My comments are about `double` (which is used in the question), and `double` has a fixed length of 64 bits; that's why we have `O(1)` in this case. – Dmitry Bychenko Apr 17 '22 at 09:45
  • Understanding the complexity of actual programs requires some statement of a computational model and of how the actual program maps into that model. This is always true (but often disregarded), and it matters most when you're using floating point (and need an accurate result) and when the result of the function quickly goes out of range for standard types. Here, you've got floating point, a complex operation (pow), large numbers, and a need for high accuracy if you want correct results. – Paul Hankin Apr 17 '22 at 10:38
  • People are arguing about the complexity of exponentiation, but that's not what OP's question is about. His computation uses `java.lang.Math.pow()`, which is O(1). If OP doubles the size of N, or halves it, the computation takes the same amount of time. – President James K. Polk Apr 17 '22 at 13:41
  • @PresidentJamesK.Polk Will Math.pow() take O(log n) time or O(1)? – CodingDevil Apr 17 '22 at 13:44
  • `java.lang.Math.pow()` is O(1). But all `double` arithmetic is limited in precision, so using `java.lang.Math.pow()` means you're only getting an approximation to the answer. – President James K. Polk Apr 17 '22 at 13:57
  • @PresidentJamesK.Polk Yeah, looks like the only "range" where it's correct is [from N=36 to N=71](https://tio.run/##ZVDBboMwDL3zFRanZBRaJu0yhnbbrWhSj9MOZgWaLk1YCJ2qqd/OnIy2dA1SHL/3/LC9xT3G2/XnMIhdq42FLQFJb4VM7rJgiklUjcPavpTiAz4kdh0sUajgJwA6I95ZtBT2WqxhRyxbWSNU8/YOaJqOw5/YHalVAwg5LGZQUkizM1VrA0woC4WjMwpPkC7cI4qmFmcbXdekrEXJCg4xYHYlWR06W@0S3dukpWasVKyACEL6olNRBMyb0P/gGcIQHon1UHnwOvekGh5ynt02gG4CJEF5Tbr5/kFeOcGOweX217hC7@u684vgl6lroVDCWtPCK@i@jH0R@ypfot0kLmMPkwZH1etG5OwkjVI@v7@RtCQhWZzeMNRDkTPv3@pvRqJZweNz3vqcz0/2l3pT2d4oYG4S7m2ycdbjMPwC). Below or above it's wrong in various ways. – Kelly Bundy Apr 17 '22 at 15:38
  • (Below 36, their formula (as implemented by them) isn't good enough. Above 71, `double` isn't good enough. Above 92, `long` also isn't good enough. And at 1475, the `double` becomes "Infinity".) – Kelly Bundy Apr 17 '22 at 15:48
  • @Dmitry My main point is that asymptotic complexity only makes sense to describe an algorithm - it's a theoretical concept that doesn't make sense for actual implementations. Arrays in Java are limited to less than 2^31 elements, so by definition sort is O(1) in Java. You can use Java to describe the algorithm, but it's just a concise shorthand. Although I guess some schools tend to not teach that distinction. Big O performance is really not a useful real world characteristic (as all those galactic algorithms demonstrate). – Voo Apr 18 '22 at 18:24
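
Since the comments note that `double` stops being good enough above N = 71 while `long` holds exact values up to N = 92, here is a sketch (my own illustration, not from the thread) of an exact O(log N) alternative using the fast-doubling identities; the class and method names are made up:

```java
public class FastFib {
    // Fast-doubling identities:
    //   F(2k)   = F(k) * (2*F(k+1) - F(k))
    //   F(2k+1) = F(k)^2 + F(k+1)^2
    // O(log N) arithmetic operations; fib(n) is exact in long for n <= 92
    // (the unused F(n+1) slot may overflow at the very top level).
    static long[] fibPair(int n) { // returns {F(n), F(n+1)}
        if (n == 0) return new long[]{0L, 1L};
        long[] half = fibPair(n / 2);
        long a = half[0], b = half[1];    // a = F(k), b = F(k+1), k = n/2
        long c = a * (2 * b - a);         // F(2k)
        long d = a * a + b * b;           // F(2k+1)
        return (n % 2 == 0) ? new long[]{c, d} : new long[]{d, c + d};
    }

    static long fib(int n) {
        return fibPair(n)[0];
    }

    public static void main(String[] args) {
        // F(92) is the largest Fibonacci number that fits in a signed long.
        System.out.println(fib(92));
    }
}
```

This gives the O(log N) behavior the commenters attribute to exponentiation by squaring, but with exact integer arithmetic rather than floating point.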

1 Answer


Time complexity is calculated in terms of the number of iterations. This method of calculating Fibonacci is actually O(1) because everything is done in one pass, without any recursion or looping.

Edit: If you only call this Fibonacci method a couple of times, the runtime of the pow() operations doesn't matter. However, the more the method is called, the more you do, in a sense, have to account for the runtime of pow() as well. But in terms of Big O notation, I would say that this is O(1).
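
For contrast with the debate in the comments, here is a sketch (my own illustration, not from this answer) of integer exponentiation by squaring, which performs O(log e) multiplications; `Math.pow` on fixed-width `double`s, by contrast, is usually treated as O(1):

```java
public class PowDemo {
    // Exponentiation by squaring on integers:
    // the number of multiplications grows like log2(e), not e.
    static long ipow(long base, int e) {
        long result = 1;
        while (e > 0) {
            if ((e & 1) == 1) result *= base; // this bit of the exponent is set
            base *= base;                     // square for the next bit
            e >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(ipow(3, 13)); // 3^13 = 1594323
    }
}
```

Whether one calls pow "O(1)" or "O(log N)" thus depends on whether the exponent ranges over a fixed-width type or over arbitrary integers, which is exactly the disagreement in the comment thread.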

  • "Time complexity is calculated in terms of the number of iterations." That's absolutely wrong. Do you think that a recursive algorithm (no loop anywhere in sight) is always constant in time? – Voo Apr 17 '22 at 08:00
  • @BooleanCube Thanks for your answer. Time complexity is calculated on the basis of how much computation our CPU does for a given input size. My question was whether changing the input N changes the time complexity, since we are using two pow() functions, and one pow() call takes O(log n) time. – CodingDevil Apr 17 '22 at 08:06