
I was required to write a simple implementation of the Fibonacci function and then to make it faster.

Here is my initial implementation

import java.util.Scanner;

public class Fibonacci {

    public static long getFibonacciOf(long n) {
        if (n == 0) {
            return 0;
        } else if (n == 1) {
            return 1;
        } else {
            return getFibonacciOf(n - 2) + getFibonacciOf(n - 1);
        }
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        while (true) {
            System.out.println("Enter n :");
            long n = scanner.nextLong();
            if (n >= 0) {
                long beginTime = System.currentTimeMillis();
                long fibo = getFibonacciOf(n);
                long endTime = System.currentTimeMillis();

                long delta = endTime - beginTime;

                System.out.println("F(" + n + ") = " + fibo + " ... computed in " + delta + " milliseconds");
            } else {
                break;

            }
        }

    }

}

As you can see, I am using System.currentTimeMillis() to get a simple measure of the time elapsed while computing Fibonacci.

This implementation quickly becomes exponentially slow, as you can see in the following picture:

simple version of fibonacci's algorithm

So I had a simple optimisation idea: put previously computed values in a HashMap and, instead of recomputing them each time, simply take them back from the HashMap if they exist. If they don't exist, we compute them and put them in the HashMap.

Here is the new version of the code

import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

public class FasterFibonacci {

    private static Map<Long, Long> previousValuesHolder;
    static {
        previousValuesHolder = new HashMap<Long, Long>();
        previousValuesHolder.put(Long.valueOf(0), Long.valueOf(0));
        previousValuesHolder.put(Long.valueOf(1), Long.valueOf(1));
    }

    public static long getFibonacciOf(long n) {
        if (n == 0) {
            return 0;
        } else if (n == 1) {
            return 1;
        } else {
            if (previousValuesHolder.containsKey(Long.valueOf(n))) {
                return previousValuesHolder.get(n);
            } else {
                long newValue = getFibonacciOf(n - 2) + getFibonacciOf(n - 1);
                previousValuesHolder.put(Long.valueOf(n), Long.valueOf(newValue));
                return newValue;
            }
        }
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        while (true) {
            System.out.println("Enter n :");
            long n = scanner.nextLong();
            if (n >= 0) {
                long beginTime = System.currentTimeMillis();
                long fibo = getFibonacciOf(n);
                long endTime = System.currentTimeMillis();

                long delta = endTime - beginTime;

                System.out.println("F(" + n + ") = " + fibo + " ... computed in " + delta + " milliseconds");
            } else {
                break;

            }
        }

    }
}
This change makes the computation extremely fast. It computes all the values from 2 to 103 in no time at all, and I get a long overflow at F(104) (it gives me F(104) = -7076989329685730859, which is wrong). I find it so fast that **I wonder if there are any mistakes in my code (thank you for checking, and please let me know)**. Please take a look at the second picture:

Faster Fibonacci

Is my faster Fibonacci implementation correct? (It seems so to me, because it produces the same values as the first version, but since the first version was too slow I could not compute bigger values with it, such as F(75).) What other ways can I use to make it faster? Or is there a better way to make it faster? Also, how can I compute Fibonacci for greater values (such as 150, 200) without getting a **long overflow**? Though it seems fast, I would like to push it to the limits. I remember Mr. Abrash saying 'The best optimiser is between your two ears', so I believe it can still be improved. Thank you for helping.

[Edit note:] Though this question addresses one of the main points in my question, you can see from above that I have additional issues.

alainlompo
  • You are not getting a stack overflow. You are getting long overflow, since the result exceeds the max value of the long type. You can use BigInteger instead of long. – Eran Mar 28 '15 at 12:58
  • Try writing an iterative version of the algorithm – fvannee Mar 28 '15 at 12:58
  • I would give you some hints. First as suggested by @fvannee , implement iterative version and secondly, focus on what minimum things you require to compute F(n). Are you aware of DP? – Pranalee Mar 28 '15 at 13:01
  • You can use matrix (it's only 2x2, don't worry) exponentiation (by squaring), that only helps for very large numbers though, like F(500+) – harold Mar 28 '15 at 13:01
  • @Eran, that's right, sorry for the mistake: I will edit it – alainlompo Mar 28 '15 at 13:04
  • @fvannee I will try that and let you know, thanks !!! – alainlompo Mar 28 '15 at 13:04
  • @Pranalee, no I am not aware of DP, ...unless you mean Design Pattern. Otherwise I am willing to learn – alainlompo Mar 28 '15 at 13:05
  • @harold, not sure I am know how to do that...If you could develop or share links or more hints,... thank you – alainlompo Mar 28 '15 at 13:07
  • no by DP, I meant dynamic programming – Pranalee Mar 28 '15 at 13:07
  • @Eran, just corrected the long overflow. Thanks again – alainlompo Mar 28 '15 at 13:10
  • To correct the long overflow use the `BigInteger` instead of longs as fibonacci numbers – MinecraftShamrock Mar 28 '15 at 13:10
  • @Pranalee, yes I am aware of dynamic programming, but will need a pointer or hints from you to see how to use it in this case. Thank you in advance – alainlompo Mar 28 '15 at 13:11
  • @MinecraftShamrock, thank you. I will do that asap and see how far I can go with it... Do you think I may be able to compute F(500) with it? Or maybe even more? – alainlompo Mar 28 '15 at 13:12
  • @alainlompo `BigInteger` can handle integers of any size but at some point you will run out of memory because you need to store too many too big numbers. If you keep all smaller fibonacci numbers in memory you won't get very far, but if you always only keep the last two numbers in memory, you will probably be able to compute F(500) – MinecraftShamrock Mar 28 '15 at 13:15
  • @MinecraftShamrock, if you always only keep the last 2 (two) numbers in memory? Great idea... I am excited about reaching a new level with the ideas you guys are sharing with me... I can't wait to get to the code again...Thanks – alainlompo Mar 28 '15 at 13:17
  • Do you use Java 8? If yes you could parallelize all this and use a `ConcurrentHashMap` along with `.computeIfAbsent()`. – fge Mar 28 '15 at 13:41
  • @fge, yes I can use java 8. Thanks for the hint: will try it – alainlompo Mar 28 '15 at 21:34
  • I'm voting to close this question as off-topic because it is a code review request – Raedwald Mar 31 '15 at 06:28

8 Answers


Dynamic programming

Idea: instead of recomputing the same values multiple times, you store the calculated values and use them as you go along.

f(n) = f(n-1) + f(n-2), with f(0) = 0, f(1) = 1. So at the point when you have calculated f(n-1), you can easily calculate f(n) if you have stored the values of f(n-1) and f(n-2).

Let's take an array of Bignums first, A[0..200], initialized to -1.

Pseudocode

fib(n)
{
    if (A[n] != -1) return A[n];
    A[0] = 0;
    A[1] = 1;
    for i = 2 to n
        A[i] = A[i-1] + A[i-2];
    return A[n];
}

This runs in O(n) time. Check it out yourself.

This technique is also called memoization.
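A possible Java rendering of the pseudocode above, using BigInteger in place of the Bignums so that large values don't overflow (the array bound of 200 follows the size chosen above, and null stands in for the -1 sentinel):

```java
import java.math.BigInteger;

public class FibMemo {
    // Cache of computed values; null plays the role of the -1 sentinel.
    private static final BigInteger[] A = new BigInteger[201];

    public static BigInteger fib(int n) {
        if (A[n] != null) return A[n];      // already computed: just return it
        A[0] = BigInteger.ZERO;
        A[1] = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            A[i] = A[i - 1].add(A[i - 2]);  // F(i) = F(i-1) + F(i-2)
        }
        return A[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(104)); // well past the long overflow at F(104)
    }
}
```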

The IDEA

Dynamic programming (usually referred to as DP) is a very powerful technique to solve a particular class of problems. It demands a very elegant formulation of the approach and simple thinking, and the coding part is very easy. The idea is very simple: if you have solved a problem with the given input, then save the result for future reference, so as to avoid solving the same problem again. In short: 'Remember your Past'.

If the given problem can be broken up into smaller sub-problems, and these smaller subproblems are in turn divided into still-smaller ones, and in this process you observe some overlapping subproblems, then that's a big hint for DP. Also, the optimal solutions to the subproblems contribute to the optimal solution of the given problem (referred to as the Optimal Substructure Property).

There are two ways of doing this.

1.) Top-Down : Start solving the given problem by breaking it down. If you see that the problem has been solved already, then just return the saved answer. If it has not been solved, solve it and save the answer. This is usually easy to think of and very intuitive. This is referred to as Memoization. (I have used this idea).

2.) Bottom-Up : Analyze the problem and see the order in which the sub-problems are solved and start solving from the trivial subproblem, up towards the given problem. In this process, it is guaranteed that the subproblems are solved before solving the problem. This is referred to as Dynamic Programming. (MinecraftShamrock used this idea)


There's more!

(Other ways to do this)

Our quest for a better solution doesn't end here. You will now see a different approach.

If you know how to solve recurrence relations, then you can find a closed-form solution to this relation

f(n)=f(n-1)+f(n-2) given f(0)=0,f(1)=1

You will arrive at this formula after solving it:

f(n)= (1/sqrt(5))((1+sqrt(5))/2)^n - (1/sqrt(5))((1-sqrt(5))/2)^n

which can be written in more compact form

f(n)=floor((((1+sqrt(5))/2)^n) /sqrt(5) + 1/2)
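As a quick sketch, this closed form drops straight into Java; note that double precision makes it exact only up to roughly F(70), after which rounding errors creep in:

```java
public class BinetFib {
    // Binet's closed form: floor(phi^n / sqrt(5) + 1/2).
    // Only exact while double precision holds (roughly n <= 70).
    public static long fib(int n) {
        double sqrt5 = Math.sqrt(5);
        double phi = (1 + sqrt5) / 2;
        return (long) Math.floor(Math.pow(phi, n) / sqrt5 + 0.5);
    }

    public static void main(String[] args) {
        System.out.println(fib(20)); // 6765
    }
}
```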

Complexity

You can compute the power of a number in O(log n) multiplications. For this you have to learn exponentiation by squaring.

EDIT: It is good to point out that this doesn't necessarily mean that the Fibonacci number can be found in O(log n): the number of digits we need to calculate grows linearly. The placement of this statement may have made it seem to claim the wrong idea that the nth Fibonacci number can be calculated in O(log n) time. [Bakuriu, MinecraftShamrock commented on this]

gsamaras
user2736738
  • @alainlompo While I appreciate that you helped OP by posting this answer, but sometimes its better to teach fishing than server ready fishes. – Pranalee Mar 28 '15 at 13:19
  • Doesn't this have the same complexity as 2nd one from OP, if we assume that the hash-map lookup is O(1)? – Mikhail Mar 28 '15 at 13:21
  • @Pranalee.: Sorry! I thought if he doesn't know it, I should atleast give him the idea. – user2736738 Mar 28 '15 at 13:21
  • @Pranalee, I am not sure I get this last remark, but sure I would rather learn to fish but be served fishing. I did no go back yet to implement the various ideas just for sake of time, but I will sure do it asap. Thank you for helping. – alainlompo Mar 28 '15 at 13:22
  • @Mikhail.:Yes! asymptotically they are same. But using simple array will give better running time I think. If you find anything wrong plz inform I will learn as well as change my post. – user2736738 Mar 28 '15 at 13:23
  • @Mikhail: Big O of the both the solution is same, but O(1) of hashmap is ideal situation. In practice, it won't be O(1) always. Also if you think, looking up in hashtable is always overhead (computing hashcode and comparing keys) rather than just picking values from last 2 variables. Also OP is using recursion , while this solution is iterative which improves performance considerably. – Pranalee Mar 28 '15 at 13:27
  • @alainlompo Don''t worry.. :) i just wanted you to first attempt some iterative/dp yourself before looking at solution – Pranalee Mar 28 '15 at 13:36
  • I prefer the matrix power, it's all integer so you don't have to worry about numerical issues, but it's also O(log n) – harold Mar 28 '15 at 13:56
  • @ You are talking about `matrix exponentiation`. That's also `log(n)`. But in both the cases one needs to know the idea of `characteristic equations`. That's why I simply mentioned it. – user2736738 Mar 28 '15 at 13:58
  • @amalsom Doesn't exponentiation by squaring use O(log n) multiplications and not O(log n) time? So the whole thing takes more than linear time according to [wikipedia](http://en.wikipedia.org/wiki/Exponentiation_by_squaring#Computational_complexity), doesn't it? – MinecraftShamrock Mar 28 '15 at 14:36
  • @MinecraftShamrock.:But asymptotically they will be same. thanks – user2736738 Mar 28 '15 at 16:41
  • @amalsom So is using the closed form and exponentiation better than linear time? – MinecraftShamrock Mar 28 '15 at 16:43
  • @MinecraftShamrock.: Look if you consider time complexity then second may seem good but `in O(n) algo we can store n elements at a time` but here only `log(n)`. So I don't think there's much difference. Infact I have never mentioned it. Look in the first case from `amortized analysis` you will get O(1) avg operation. In the second it is not. I have just stated it. – user2736738 Mar 28 '15 at 16:48
  • -1 for spreading the false information that you can compute the nth number in logn time. It's **false**. Fibonacci numbers grow exponentially, so you have to compute O(n) bits which **requires** O(n) complexity. The fact that you can use O(logn) multiplications doesn't change the fact that they require O(n) time. If you consider multiplication O(1) you are either using a wrong computational model or you are using bounded numbers. In the latter case you can compute the fibonacci numbers in O(1) time (since they are bounded...) – Bakuriu Mar 29 '15 at 12:51
  • @Bakuriu.: You probably misunderstood my answer. I should have stated it clearly. Check my answer now. Thanks for helping me clarifying my answer. – user2736738 Mar 29 '15 at 13:01

If you need to compute the nth Fibonacci number very frequently, I suggest using amalsom's answer.

But if you want to compute a very big Fibonacci number, you will run out of memory because you are storing all the smaller Fibonacci numbers. The following pseudocode keeps only the last two Fibonacci numbers in memory, i.e. it requires much less memory:

fibonacci(n) {
    if n = 0: return 0;
    if n = 1: return 1;
    a = 0;
    b = 1;
    for i from 2 to n: {
        sum = a + b;
        a = b;
        b = sum;
    }
    return b;
}

Analysis
This can compute very high fibonacci numbers with quite low memory consumption: We have O(n) time as the loop repeats n-1 times. The space complexity is interesting as well: The nth fibonacci number has a length of O(n), which can easily be shown:
Fn <= 2 * Fn-1
Which means that the nth fibonacci number is at most twice as big as its predecessor. Doubling a number in binary is equivalent with a single left-shift, which increases the number of necessary bits by one. So representing the nth fibonacci number takes at most O(n) space. We have at most three successive fibonacci numbers in memory which makes O(n) + O(n-1) + O(n-2) = O(n) total space consumption. In contrast to this the memoization algorithm always keeps the first n fibonacci numbers in memory, which makes O(n) + O(n-1) + O(n-2) + ... + O(1) = O(n^2) space consumption.

So which way should one use?
The only reason to keep all lower fibonacci numbers in memory is if you need fibonacci numbers very frequently. It is a question of balancing time with memory consumption.
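The pseudocode above translates almost line for line into Java; here is a sketch using BigInteger (as suggested in the comments) so that big values don't overflow:

```java
import java.math.BigInteger;

public class IterativeFib {
    // Keeps only the last two Fibonacci numbers in memory.
    public static BigInteger fib(int n) {
        if (n == 0) return BigInteger.ZERO;
        BigInteger a = BigInteger.ZERO; // F(i-2)
        BigInteger b = BigInteger.ONE;  // F(i-1)
        for (int i = 2; i <= n; i++) {
            BigInteger sum = a.add(b);
            a = b;
            b = sum;
        }
        return b; // F(n)
    }

    public static void main(String[] args) {
        System.out.println(fib(500)); // fine with BigInteger; a long would overflow
    }
}
```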

MinecraftShamrock
  • There is a [**closed form**](http://en.wikipedia.org/wiki/Fibonacci_number#Closed-form_expression) which can be used to compute large Fibonacci numbers in `O(1)`. – Boris the Spider Mar 28 '15 at 15:44
  • This is a very sensible approach - it avoids the recursion catastrophe and is how you would probably do this if you were asked to find Fib(100) using pencil and paper (and you couldn't remember the closed form with the golden section). – Floris Mar 28 '15 at 15:53
  • @BoristheSpider That works in the same way as does building a fixed lookup table up to the maximum fibonacci number you can represent and looking it up. In fact a lookup table would be the fastest solution. Problem is: they work only for *bounded* numbers. If you remove the bounded assumption then the complexity **must** be O(n). – Bakuriu Mar 29 '15 at 13:01
  • @MinecraftShamrock, I implemented the code that you've shared using BigInteger. It is incredibly fast and can compute really BIG numbers. Computed F(2000) in 2 milliseconds...Thanks! – alainlompo Mar 29 '15 at 17:11

Get away from the Fibonacci recursion and use the identities

(F(2n), F(2n-1)) = (F(n)^2 + 2 F(n) F(n-1), F(n)^2+F(n-1)^2)
(F(2n+1), F(2n)) = (F(n+1)^2+F(n)^2, 2 F(n+1) F(n) - F(n)^2)

This allows you to compute (F(m+1), F(m)) in terms of (F(k+1), F(k)) for k half the size of m. Written iteratively, with some bit shifting for division by 2, this should give you the theoretical O(log n) speed of exponentiation by squaring while staying entirely within integer arithmetic. (Well, O(log n) arithmetic operations. Since you will be working with numbers with roughly n bits, it won't be O(log n) time once you are forced to switch to a large integer library. After F(46), you will overflow the 32-bit integer data type, which only goes up to 2^31 - 1; a 64-bit long overflows after F(92).)

(Apologies for not remembering Java well enough to implement this in Java; anyone who wants to is free to edit it in.)
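For anyone who wants to try it, here is a sketch of one way this could look in Java, using the equivalent fast-doubling form of the identities, F(2k) = F(k)·(2·F(k+1) − F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2, written recursively rather than iteratively, and with BigInteger to stay exact:

```java
import java.math.BigInteger;

public class FastDoublingFib {
    // Returns the pair {F(n), F(n+1)}.
    private static BigInteger[] fibPair(long n) {
        if (n == 0) return new BigInteger[]{BigInteger.ZERO, BigInteger.ONE};
        BigInteger[] half = fibPair(n / 2);
        BigInteger a = half[0];                                // F(k), k = n/2
        BigInteger b = half[1];                                // F(k+1)
        BigInteger c = a.multiply(b.shiftLeft(1).subtract(a)); // F(2k)
        BigInteger d = a.multiply(a).add(b.multiply(b));       // F(2k+1)
        if (n % 2 == 0) {
            return new BigInteger[]{c, d};
        } else {
            return new BigInteger[]{d, c.add(d)};
        }
    }

    public static BigInteger fib(long n) {
        return fibPair(n)[0];
    }
}
```

This makes O(log n) recursive calls, matching the exponentiation-by-squaring bound discussed above (counting arithmetic operations, not bit operations).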

abligh
  • This appears to be the only answer given which is *O(log n)* in time. – abligh Mar 28 '15 at 20:16
  • Please add the derivation of the above formula to your answer. – WestCoastProjects Mar 28 '15 at 21:10
  • The following site has a good discussion and derivation of this method. http://www.nayuki.io/page/fast-fibonacci-algorithms – cspirou Mar 28 '15 at 21:43
  • @abligh It does not. Fibonacci **requires** O(n) time because the nth fibonacci number has O(n) bits. If you only consider bounded numbers than you can achieve O(1) time trivially, and for unbounded numbers you can compute the nth fibonacci number using logn *multiplications*, but **they don't take O(1) time**! The total time is still O(n). – Bakuriu Mar 29 '15 at 12:54
  • I've added an answer using these identities. – abligh Mar 29 '15 at 16:19
  • @Bakuriu that isn't the conventional way of looking at things, as the loop `for (i=0; i – abligh Mar 29 '15 at 16:24
  • @abligh In the usual computational model addition takes O(1) time but multiplication takes O(log n) time. This is due to the fact that addition at most increases the number of bits by one, while multiplication can make the number of bits grow much more (which leads to unrealistic results). Note that, since fibonacci numbers, grow exponentially, a table of all fibonacci numbers representable in 64 bits would be pretty small. It would be a different thing if they grew less (but at that point the point I'm making wouldn't be valid because the result would have fewer bits). – Bakuriu Mar 29 '15 at 16:28
  • @abligh In fact only 93 fibonacci numbers can be represented in 64 bits. – Bakuriu Mar 29 '15 at 16:30
  • @Bakuriu Yes, to calculate `f(n)`, the time complexity can be `O(logn)`. While if using pre-compute from `f(1)` to `f(n)`, it requires `n*logn`. But once computed and saved in memory, following query `f(k), 1<=k<=n` requires only `O(1)`. – coderz Mar 30 '15 at 02:09
  • @Bakuriu Agree with you that 64 bits can only handle less than 100 Fibonacci numbers. In Java, we can use `BigInteger`. – coderz Mar 30 '15 at 02:11
  • Fibonacci(0) = 0
  • Fibonacci(1) = 1
  • Fibonacci(n) = Fibonacci(n - 1) + Fibonacci(n - 2), when n >= 2

Usually there are two ways to calculate a Fibonacci number:

  1. Recursion:

    public long getFibonacci(long n) {
      if(n <= 1) {
        return n;
      } else {
        return getFibonacci(n - 1) + getFibonacci(n - 2);
      }
    }
    

    This way is intuitive and easy to understand, but because it does not reuse calculated Fibonacci numbers, the time complexity is about O(2^n). On the other hand, it does not store calculated results, so apart from the O(n) recursion stack it needs no extra space.

  2. Dynamic Programming:

    public long getFibonacci(long n) {
      long[] f = new long[(int)(n + 1)];
      f[0] = 0;
      f[1] = 1;
      for(int i=2;i<=n;i++) {
        f[i] = f[i - 1] + f[i - 2];
      }
      return f[(int)n];
    }
    

    This way stores calculated Fibonacci numbers and reuses them when calculating the next one. The time complexity is much better, O(n), while the space complexity is O(n). Let's investigate whether the space complexity can be optimized... Since f(i) only requires f(i - 1) and f(i - 2), it is not necessary to store all calculated Fibonacci numbers.

    The more efficient implementation is:

    public long getFibonacci(long n) {
      if (n <= 1) {
        return n;
      }
      long x = 0, y = 1;
      long ans = 0;
      for (int i = 2; i <= n; i++) {
        ans = x + y;
        x = y;
        y = ans;
      }
      return ans;
    }
    

    With time complexity O(n), and space complexity O(1).

Added: since Fibonacci numbers increase amazingly fast, a long can only handle the first 93 Fibonacci numbers (F(92) is the largest that fits). In Java, we can use BigInteger to store more Fibonacci numbers.

coderz
  • Strong statements, if erroneous, and unconventional use of `"memorize"` vs. [_memoize_](http://en.wikipedia.org/wiki/Memoization). Care to read (and understand) other answers, esp. David E Speyer's? – greybeard Mar 29 '15 at 09:01
  • @greybeard Thanks for your reminder, here Memoization should be used. Answer edited. – coderz Mar 29 '15 at 09:37
  • @coderz `The most efficient implementation is` should be implementation using iterative approach. Follow @greybeard suggestion. – catch23 Mar 29 '15 at 10:01
  • @nazar_art Updated, DP is not the most efficient way, I removed the word "most". The most efficient way should be `O(logn)`. – coderz Mar 29 '15 at 11:45
  • @coderz No. Fibonacci numbers **require** O(n) time. If you are working with bounded integers than all these algorithms are O(1) (the fastest would be: create a table and just look it up). With unbounded integers you can compute it using log n *multiplications*, which are **not** O(1), they still require O(n) total time. – Bakuriu Mar 29 '15 at 12:55
  • @Bakuriu Yes, to calculate f(n), the time complexity can be O(logn). While if using pre-compute from f(1) to f(n), it requires n*logn. But once computed and saved in memory, following query f(k), 1<=k<=n requires only O(1). – coderz Mar 30 '15 at 14:49

Precompute a large number of fib(n) results and store them as a lookup table inside your algorithm. Bam, free "speed".

Now if you need to compute fib(101) and you already have fibs 0 to 100 stored, this is just like trying to compute fib(1).

Chances are this isn't what this homework is looking for, but it's a completely legit strategy and basically the idea of caching extracted further away from running the algorithm. If you know you're likely to be computing the first 100 fibs often and you need to do it really really fast, there's nothing faster than O(1). So compute those values entirely out of band and store them so they can be looked up later.

Of course, cache values as you compute them too :) Duplicated computation is waste.
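As a minimal sketch of this precomputation idea (the table bound of 93 entries comes from where a signed long overflows, at F(93); the class and method names here are illustrative):

```java
public class FibLookup {
    // F(0)..F(92): every Fibonacci number that fits in a signed 64-bit long.
    private static final long[] FIBS = new long[93];
    static {
        FIBS[0] = 0;
        FIBS[1] = 1;
        for (int i = 2; i < FIBS.length; i++) {
            FIBS[i] = FIBS[i - 1] + FIBS[i - 2];
        }
    }

    public static long fib(int n) {
        if (n < 0 || n >= FIBS.length) {
            throw new IllegalArgumentException("F(" + n + ") is outside the precomputed table");
        }
        return FIBS[n]; // O(1) lookup
    }
}
```

In a real application the table could equally well be hardcoded literals rather than filled in a static initializer; the lookup itself stays O(1) either way.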

MushinNoShin
  • This doesn't really make anything faster, you would have to include the initialisation time in any benchmark. – Boris the Spider Mar 28 '15 at 15:44
  • You're assuming the initialization would *need* to take place as part of a benchmark. If we just need a fib(n) that returns quickly and we don't mind that it takes up a large amount of space, we can precompute this and package it with the code in some way (either hardcoded as a switch or mashed into some sort of key-value store out of process). Is it an elegant algorithm optimization? Nope! It's a complete "hack" for a specific sort of use case. All I hear is fast, and O(1) is as fast as it can get. Of course if I were doing this professionally I'd raise storage space concerns. – MushinNoShin Mar 28 '15 at 20:33
  • Sorry but "I would pre-calculate it and save it to a file" is not a valid algorithmic optimisation. – Boris the Spider Mar 28 '15 at 20:46
  • @BoristheSpider Algorithmic optimization, perhaps not. But in a practical application, you would certainly want to just hardcode the values. This is especially true if you're using a 32-bit or 64-bit integer (as opposed to an arbitrary-precision integer): [F_48 is greater than 2^32](https://www.wolframalpha.com/input/?i=F_47%2C+2%5E32%2C+F_48), and [F_94 is greater than 2^64](https://www.wolframalpha.com/input/?i=F_93%2C+2%5E64%2C+F_94), so you can't even return larger values. A simple `return FIBS[n] if n < MAX_FIB else OverflowError()` would be optimal for production use. – wchargin Mar 29 '15 at 18:41

Here is a snippet of code with an iterative approach instead of recursion.

Output example:

Enter n: 5
F(5) = 5 ... computed in 1 milliseconds
Enter n: 50
F(50) = 12586269025 ... computed in 0 milliseconds
Enter n: 500
F(500) = ...4125 ... computed in 2 milliseconds
Enter n: 500
F(500) = ...4125 ... computed in 0 milliseconds
Enter n: 500000
F(500000) = 2955561408 ... computed in 4,476 ms
Enter n: 500000
F(500000) = 2955561408 ... computed in 0 ms
Enter n: 1000000
F(1000000) = 1953282128 ... computed in 15,853 ms
Enter n: 1000000
F(1000000) = 1953282128 ... computed in 0 ms

Some pieces of results are omitted with ... for a better view.

Code snippet:

public class CachedFibonacci {
    private static Map<BigDecimal, BigDecimal> previousValuesHolder;
    static {
        previousValuesHolder = new HashMap<>();
        previousValuesHolder.put(BigDecimal.ZERO, BigDecimal.ZERO);
        previousValuesHolder.put(BigDecimal.ONE, BigDecimal.ONE);
    }

    public static BigDecimal getFibonacciOf(long number) {
        if (0 == number) {
            return BigDecimal.ZERO;
        } else if (1 == number) {
            return BigDecimal.ONE;
        } else {
            if (previousValuesHolder.containsKey(BigDecimal.valueOf(number))) {
                return previousValuesHolder.get(BigDecimal.valueOf(number));
            } else {
                BigDecimal olderValue = BigDecimal.ONE,
                        oldValue = BigDecimal.ONE,
                        newValue = BigDecimal.ONE;

                for (int i = 3; i <= number; i++) {
                    newValue = oldValue.add(olderValue);
                    olderValue = oldValue;
                    oldValue = newValue;
                }
                previousValuesHolder.put(BigDecimal.valueOf(number), newValue);
                return newValue;
            }
        }
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        while (true) {
            System.out.print("Enter n: ");
            long inputNumber = scanner.nextLong();
            if (inputNumber >= 0) {
                long beginTime = System.currentTimeMillis();
                BigDecimal fibo = getFibonacciOf(inputNumber);
                long endTime = System.currentTimeMillis();
                long delta = endTime - beginTime;

                System.out.printf("F(%d) = %.0f ... computed in %,d milliseconds\n", inputNumber, fibo, delta);
            } else {
                System.err.println("You must enter number > 0");
                System.out.println("try, enter number again, please:");
                break;
            }
        }
    }
}

This approach runs much faster than the recursive version.

In such a situation, the iterative solution tends to be a bit faster, because each recursive method call takes a certain amount of processor time. In principle, it is possible for a smart compiler to avoid recursive method calls if they follow simple patterns, but most compilers don’t do that. From that point of view, an iterative solution is preferable.

UPDATE:

After the Java 8 release, with the Stream API available, one more way of calculating Fibonacci became available.

Checked with JDK 17.0.2.

Code:

public static BigInteger streamFibonacci(long n) {
    return Stream.iterate(new BigInteger[]{BigInteger.ONE, BigInteger.ONE},
                    p -> new BigInteger[]{p[1], p[0].add(p[1])})
            .limit(n)
            .reduce((a, b) -> b)
            .get()[0];
}

Test output:

Enter n (q for quit): 5
F(5) = 5 ... computed in 2 ms
Enter n (q for quit): 50
F(50) = 1258626902 ... computed in 0 ms
Enter n (q for quit): 500
F(500) = 1394232245 ... computed in 3 ms
Enter n (q for quit): 500000
F(500000) = 2955561408 ... computed in 4,343 ms
Enter n (q for quit): 1000000
F(1000000) = 1953282128 ... computed in 19,280 ms

The results are pretty good.
Keep in mind that ... just cuts off the remaining digits of the actual numbers.

catch23

Here's a way of provably doing it in O(log n) (as the loop runs log n times):

/* 
 * Fast doubling method
 * F(2n) = F(n) * (2*F(n+1) - F(n)).
 * F(2n+1) = F(n+1)^2 + F(n)^2.
 * Adapted from:
 *    https://www.nayuki.io/page/fast-fibonacci-algorithms
 */
private static long getFibonacci(int n) {
    long a = 0;
    long b = 1;
    for (int i = 31 - Integer.numberOfLeadingZeros(n); i >= 0; i--) {
        long d = a * ((b<<1) - a);
        long e = (a*a) + (b*b);
        a = d;
        b = e;
        if (((n >>> i) & 1) != 0) {
            long c = a+b;
            a = b;
            b = c;
        }
    }
    return a;
}

I am assuming here (as is conventional) that one multiply / add / whatever operation is constant time irrespective of number of bits, i.e. that a fixed-length data type will be used.

This page explains several methods of which this is the fastest. I simply translated it away from using BigInteger for readability. Here's the BigInteger version:

/* 
 * Fast doubling method.
 * F(2n) = F(n) * (2*F(n+1) - F(n)).
 * F(2n+1) = F(n+1)^2 + F(n)^2.
 * Adapted from:
 *    http://www.nayuki.io/page/fast-fibonacci-algorithms
 */
private static BigInteger getFibonacci(int n) {
    BigInteger a = BigInteger.ZERO;
    BigInteger b = BigInteger.ONE;
    for (int i = 31 - Integer.numberOfLeadingZeros(n); i >= 0; i--) {
        BigInteger d = a.multiply(b.shiftLeft(1).subtract(a));
        BigInteger e = a.multiply(a).add(b.multiply(b));
        a = d;
        b = e;
        if (((n >>> i) & 1) != 0) {
            BigInteger c = a.add(b);
            a = b;
            b = c;
        }
    }
    return a;
}
abligh
  • Thank you. I will test it and compare with other methods that were shared here. Note that with a long version of Fibonnaci we have a long overflow at F(104).. F(104) gives -7076989329685730859. So I will try the BigInteger version as well – alainlompo Mar 29 '15 at 16:11
  • @alainlompo I've added the `BigInteger` version for completeness. – abligh Mar 29 '15 at 16:18
  • that's neat!!! Thanks. Will use it – alainlompo Mar 29 '15 at 16:53
  • It may be worth noting that `numberOfLeadingZeros` is (in practice) JITed to a single x86 instruction. – wchargin Mar 29 '15 at 18:45
  • @user2812818 your (correct) fix got rejected before I got to it. multiply(x,y) was provided by a helper function I failed to put up (as in the reference). – abligh Apr 02 '16 at 04:00

Having followed a similar approach some time ago, I've just realized there's another optimization you can make.

If you know two large consecutive answers, you can use this as a starting point. For example, if you know F(100) and F(101), then calculating F(104) is approximately as difficult (*) as calculating F(4) based on F(0) and F(1).

Calculating iteratively upwards is as efficient, calculation-wise, as doing the same with cached recursion, but it uses less memory.

Having done some sums, I have also realized that, for any given z < n:

F(n)=F(z) * F(n-z) + F(z-1) * F(n-z-1)

If n is odd, and you choose z=(n+1)/2, then this is reduced to

F(n)=F(z)^2+F(z-1)^2

It seems to me that, by a method I have yet to find, you should be able to use the above info to find F(n) in a number of operations equal to:

the number of bits in n doublings (as per above) + the number of 1 bits in n additions; in the case of 104, this would be (7 bits, 3 '1' bits) = 14 multiplications (squarings) and 10 additions.

(*) assuming adding two numbers takes the same time, irrespective of the size of the two numbers.

AMADANON Inc.
  • `z=2n`? 2z=n or z=n/2, rather. Your assignment of ordinals may differ from others', too: what is the value of F(0)? Your impression about usability is spot on: have another stare at David E Speyer's answer. – greybeard Mar 30 '15 at 19:10
  • @greybeard Thanks for that :) yes, you're right; z=n/2, and I started at F(0)=1, F(1)=1; I will try to fix my maths, if I can work it out. – AMADANON Inc. Mar 31 '15 at 00:12