The problem is that this results in dramatic branching. Say you call fibonacci 5; that produces the following call tree:
fibonacci 5
  fibonacci 4
    fibonacci 3
      fibonacci 2
      fibonacci 1
    fibonacci 2
  fibonacci 3
    fibonacci 2
    fibonacci 1
As you can see, fibonacci 3 is called twice, fibonacci 2 is called three times and fibonacci 1 is called twice (in this very small example). There is a huge amount of overlap: you call the same function with the same arguments multiple times. This is of course inefficient: once you know that fibonacci 3 is 3, there is no need to calculate it a second time. In this very small example that is not really an issue, since calculating fibonacci 3 takes a matter of nanoseconds. But if something like fibonacci 100 ends up being calculated multiple times, the impact is dramatic. The number of redundant calls scales exponentially with n, so this is not a small problem that only matters in the margins.
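For reference, a naive doubly recursive definition along the following lines produces exactly the call tree above (this is a sketch, the exact original code may differ; the base values 1 and 2 are chosen so that fibonacci 3 is 3, as used throughout this answer):

-- Naive double recursion: each call spawns two further calls,
-- recomputing the same subresults over and over.
fibonacci :: Int -> Int
fibonacci 1 = 1
fibonacci 2 = 2
fibonacci n = fibonacci (n-1) + fibonacci (n-2)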
What you can do instead is work with accumulators: extra arguments that you pass along recursively and update as you go. For fibonacci there are two such accumulators, holding f(n-2) and f(n-1). At each step you calculate their sum and shift the pair one position forward:
fibonacci :: Int -> Int
fibonacci 1 = 1
fibonacci 2 = 2
fibonacci n = fib' (n-2) 1 2

-- x and y are the accumulators: two consecutive Fibonacci numbers.
-- Each step shifts the pair forward; when the counter reaches 0,
-- y holds the requested Fibonacci number.
fib' :: Int -> Int -> Int -> Int
fib' 0 _ y = y
fib' i x y = fib' (i-1) y (x+y)
In this case the call tree degenerates into a single linear chain:
fibonacci 5
  fib' 3 1 2
    fib' 2 2 3
      fib' 1 3 5
        fib' 0 5 8
This results in five calls (against nine calls for the original code). Of course the number of calls is not by itself a guarantee of performance, since some calls do more work than others, but our approach scales linearly with n whereas the original approach scales exponentially with n. So even if a single call of the original method were thousands of times cheaper, eventually the difference in the number of calls becomes so huge that it is outperformed.
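To make that growth concrete, here is a small sketch that counts how many calls each version makes for a given n (the helpers naiveCalls and accCalls are hypothetical, introduced only for illustration):

-- Calls made by the naive version: one for the current invocation
-- plus everything both recursive branches do.
naiveCalls :: Int -> Integer
naiveCalls 1 = 1
naiveCalls 2 = 1
naiveCalls n = 1 + naiveCalls (n-1) + naiveCalls (n-2)

-- Calls made by the accumulator version: fibonacci itself plus one
-- call to fib' for each counter value from n-2 down to 0.
accCalls :: Int -> Integer
accCalls n
  | n <= 2    = 1
  | otherwise = fromIntegral n

Here naiveCalls 5 is 9 and accCalls 5 is 5, matching the counts above, while naiveCalls 30 is already over 1.6 million against accCalls 30 being simply 30.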