
So I was wondering if any of you can give me tips regarding this. I've been doing some challenges, like the classic one of writing a method that calculates the nth Fibonacci number using a single recursive call (i.e. avoiding return fibo(n-1) + fibo(n-2);).

I really scratched my head on that one and ended up looking at the solution, which made use of a helper method:

public static int fibonacci(int n) {
    if (n < 2) {
        return n;
    }
    return fibonacci_helper(n, 1, 0);
}

// Accumulator-style helper: previous and current carry the last two
// Fibonacci values, so each step needs only a single recursive call.
public static int fibonacci_helper(int n, int previous, int current) {
    if (n < 1) {
        return current;
    }
    return fibonacci_helper(n - 1, current, previous + current);
}

I'm not really sure what approach one takes to solve questions like that quickly (without first solving it iteratively and translating that to a tail recursion, which takes a lot of time).

Would really appreciate some tips, thanks in advance.

JakeO12
    The classical and heavily inefficient recursive implementation uses the definition more directly. The one you’ve found is probably more efficient, and more complicated to understand. I myself would never use recursion at all for Fibonacci numbers. – Ole V.V. Aug 17 '20 at 06:52
    Recursion is one tool that is useful for programmers to learn. However, most languages are designed to favor iteration over recursion, which makes recursion more difficult to learn. Had you learned LISP, recursion would come more naturally than it does in Java. – NomadMaker Aug 17 '20 at 11:37

3 Answers


You first need to decide whether the question calls for a recursive solution. Typically, recursion is needed when the present solution depends on some previous (already calculated) solution.

To start with, check the behaviour on small inputs (call them corner/base cases). Then build on that by dry-running slightly larger inputs by hand. Once you have done this, you can, in most cases, figure out the recurrence relation (like here in Fibonacci). Test its validity, and then, using the base cases and the recurrence relation, write the recursion.

For example, the code below searches for a node with a particular value in a binary tree (see https://en.wikipedia.org/wiki/Binary_tree if you are not familiar with binary trees):

static boolean search(Node root, int val) {
    if (root == null)        // base case 1: empty subtree
        return false;
    if (root.value == val)   // base case 2: value found at this node
        return true;
    // otherwise keep looking in the left and right subtrees
    return search(root.left, val) || search(root.right, val);
}
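
(For completeness: the snippet above assumes some Node type. A minimal sketch of one, with the value/left/right fields used here, might look like this.)

// Minimal binary-tree node assumed by the search snippet above.
class Node {
    int value;    // value stored at this node
    Node left;    // left subtree, or null
    Node right;   // right subtree, or null

    Node(int value) {
        this.value = value;
    }
}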
Shubham Kumar

The area you're thinking of is called Dynamic Programming. The way it works is that the solution to the larger problem you're trying to solve is composed of solutions to smaller problems, and the time complexity can be reduced dramatically if you keep those solutions and reuse them, instead of calculating them multiple times. The general approach to take is to consider how the problem can be broken down, and which solutions to the smaller problems you'll need to remember in order to solve it. In this case, you could do it in linear time and linear space by keeping all the results in an array, which should be pretty easy to think of if you're looking for a DP solution. Of course that can be simplified because you don't need to keep all those numbers, but that's a separate problem.
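
A rough sketch of that array-based idea in Java (the method name fibonacciDp is just for illustration, not from the original post) could look like this:

// Bottom-up DP sketch: O(n) time and O(n) space, storing every
// intermediate Fibonacci number in an array and reusing it.
public static int fibonacciDp(int n) {
    if (n < 2) {
        return n;
    }
    int[] fib = new int[n + 1];
    fib[0] = 0;
    fib[1] = 1;
    for (int i = 2; i <= n; i++) {
        fib[i] = fib[i - 1] + fib[i - 2]; // reuse stored sub-solutions
    }
    return fib[n];
}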

Typically, DP solutions will be iterative rather than recursive, because you need to keep a large number of solutions available to calculate the next larger one. To change it to use recursion, you just need to figure out which solutions you need to pass on, and include those as the parameters.
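
For instance, the simplification mentioned above (keeping only the last two values instead of the whole array) might look like the sketch below; its two local variables play exactly the role of the previous and current parameters in the question's helper method.

// Iterative sketch: O(n) time, O(1) space, carrying only the last two values.
public static int fibonacciIterative(int n) {
    int previous = 1;
    int current = 0;
    while (n > 0) {
        int next = previous + current;
        previous = current;
        current = next;
        n--;
    }
    return current;
}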

Lionel Foxcroft

Play with it on paper, and try to discover hidden computations that are redone needlessly. Then try to avoid them.

Here you have f(n) = f(n-1) + f(n-2); obviously f(n-1) = f(n-2) + f(n-3) redoes f(n-2) needlessly, and so on. What if you could do the two at once?

Have f2(n) return two values, one for n and one for n-1; then you do (in pseudocode)

f(n) = let { (a,b) := f2(n-1) } in (a+b)

Now you have two functions, neither of which is defined yet; what good does that do? Turn this f into f2 as well, so it returns two values, not one, just as we expect it to:

f2(n) = let { (a,b) := f2(n-1) } in (a+b,a)

And voila, a recursive definition where a is reused.

All that's left is to add some corner/edge/base case(s), and check for the off-by-1 errors.
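
Since Java has no tuples, one way to sketch that pair-returning recursion (method names are mine, just to illustrate the idea) is with a two-element array:

// fib2(n) returns { f(n), f(n-1) }, so each level makes only one recursive call.
static int[] fib2(int n) {
    if (n == 1) {
        return new int[] { 1, 0 };          // base case: f(1) = 1, f(0) = 0
    }
    int[] p = fib2(n - 1);                  // p[0] = f(n-1), p[1] = f(n-2)
    return new int[] { p[0] + p[1], p[0] }; // (a+b, a) from the derivation above
}

static int fib(int n) {
    return n == 0 ? 0 : fib2(n)[0];         // handle n = 0 separately
}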

Or even better, reverse the time arrow, start from the base case, and get your iterative version for free.
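
Reversing the time arrow, the same pair becomes two running variables that start at the base case and step forward; a sketch of that iterative version:

// Iterative sketch: start from the base pair (f(1), f(0)) = (1, 0)
// and apply the same (a, b) -> (a + b, a) step n - 1 times.
static int fibIterative(int n) {
    if (n == 0) {
        return 0;
    }
    int a = 1; // f(i)
    int b = 0; // f(i - 1)
    for (int i = 1; i < n; i++) {
        int next = a + b;
        b = a;
        a = next;
    }
    return a;
}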

Recursion is a tool which is there to help us, to make problem solving easier.

Will Ness