
This is not a case where I can just use both without problems: I'm already convinced that loops are much easier to understand, and I try to always use them. But then I stumble upon something like this (a C++ insert function for a binary search tree):

Node* Insert(Node* &rootptr,Node* data) {
    if (rootptr == nullptr) {
        rootptr = data;
    }
    else if (data->number <= rootptr->number) {
        rootptr->leftptr = Insert(rootptr->leftptr,data);
    }
    else {
        rootptr->rightptr = Insert(rootptr->rightptr,data);
    }
    return rootptr;
}
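
The `Node` type isn't spelled out above; everything here assumes a plain struct roughly along these lines (the three members come from the code itself, while the `int` key type and the constructor are just guesses added for completeness):

struct Node {
    int   number;              // the key used for ordering
    Node* leftptr  = nullptr;  // smaller or equal keys go left
    Node* rightptr = nullptr;  // larger keys go right
    explicit Node(int n) : number(n) {}
};

// Typical call site: the root is passed by reference, so inserting into an
// empty tree (root == nullptr) sets the root itself.
// Node* root = nullptr;
// Insert(root, new Node(42));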

And my mind gets blown when I try to work out how to write it with loops. Well, why suffer then? Just use recursion in a case like this. But the fact that my mind gets blown actually shows how harmful recursion is, because when you look at it you don't understand exactly what it does. Yes, it's neat, but it's a dangerous kind of neat: it does several things at the same time and you don't really comprehend what is happening.

So in my opinion there are 3 cases: when recursion is simple and there's no reason to bother with it, because a loop does the job just as well; when recursion is complex and it makes your code unreadable; and of course the cases where there's no other way, so you just have to use it, like the Ackermann function. Why use it then (besides those specific cases)?

Martian
  • Representing functions recursively can be much easier to understand. – Telescope Jun 15 '20 at 16:07
  • Some data structures (like trees in your example) are naturally recursive, so processing them with recursive functions is much simpler than using loops. That's why your mind got blown: recursion is natural in this case. – ForceBru Jun 15 '20 at 16:09
  • @Telescope well yes, you should always use common sense and choose 1 recursion instead of 10 loops, but when I have a choice between 3 loops and 1 recursion, I'd take 3 loops. – Martian Jun 15 '20 at 16:09
  • @ForceBru but I don't really understand this function. Maybe because calling the first argument `rootptr` is obscure, as it'll be the root pointer of the tree only on the first entry into the function. – Martian Jun 15 '20 at 16:13
  • There are some good answers in here, I enjoyed reading https://stackoverflow.com/questions/5250733/what-are-the-advantages-and-disadvantages-of-recursion – Sercan Jun 15 '20 at 16:13
  • @AlexeiSavitsky, that's the whole point: it will be the root pointer of _each subtree_ of a given tree. And the beauty of recursion is that in this case your function deals with a tree made of only three nodes: the root, the left and the right nodes. It doesn't care if the _whole_ tree has like 1000 nodes: it will work anyway because all _subtrees_ are organised in the same way. – ForceBru Jun 15 '20 at 16:16
  • @ForceBru thanks, it's a great explanation! But recursive functions become a pain when you want to wrap the tree in a class, because then the recursive methods end up looking like `tree.input(tree.head, data)` (see the sketch just below these comments for the usual workaround). – Martian Jun 15 '20 at 16:19
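
The usual workaround, sketched here with made-up names (`Tree`, `insert`, `insertNode`), is to keep the recursion in a private helper and let a public method supply the root pointer itself:

// Sketch of the wrapper pattern: callers never see the extra pointer argument;
// the recursion lives in a private static helper that works on any subtree.
class Tree {
public:
    void insert(Node* data) { root = insertNode(root, data); }

private:
    Node* root = nullptr;

    static Node* insertNode(Node* subtree, Node* data) {
        if (subtree == nullptr) return data;   // found the empty spot
        if (data->number <= subtree->number)
            subtree->leftptr = insertNode(subtree->leftptr, data);
        else
            subtree->rightptr = insertNode(subtree->rightptr, data);
        return subtree;                        // existing subtree root is unchanged
    }
};

// Usage: tree.insert(new Node(42)); with no tree.head argument in sight.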

2 Answers

1

You are on the wrong track here: understanding recursion takes some study to get it firmly into your head, but once you get the hang of it you will see the elegance of the approach, and the sense of danger will quickly go away.

Does this mean that there's no danger at all in recursion? Well, there is one very important risk: every recursive call pushes a new frame onto the call stack, and it stays there until that call returns. With single recursion (like fact(n) = n*fact(n-1)) the stack grows linearly with n, so the problem is limited. With multiple recursion (like in a binary tree) the total number of calls can grow exponentially, but what the stack has to hold at any moment is only the current chain of calls, i.e. the depth of the recursion; a very deep recursion (for example over a badly unbalanced tree) can blow up your call stack and crash your program.
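
A rough sketch of that failure mode, reusing `Node` and `Insert` from the question (the exact numbers are only illustrative): inserting keys in already-sorted order degenerates the tree into a long chain, so every insertion recurses as deep as the tree already is.

// Illustration only: sorted input turns the tree into a chain, so inserting
// node i recurses through all i nodes already in the tree. Long before the
// loop finishes, the recursion is tens of thousands of frames deep, which
// will almost certainly overflow a default-sized stack. Inserting the same
// keys in shuffled order keeps the depth near log2(1000000), about 20, and
// is perfectly safe.
int main() {
    Node* root = nullptr;
    for (int i = 0; i < 1000000; ++i)
        Insert(root, new Node(i));   // expect a stack overflow here eventually
    return 0;
}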

Dominique
  • So in cases where recursion is actually useful it often becomes dangerous? – Martian Jun 15 '20 at 16:29
  • Generally not: for the call stack to overflow you either need a very deep recursion or a bug in your code. A binary tree with a depth of 20 already holds about a million nodes, but inserting into it only keeps about 20 frames on the stack at any one time, which will not cause a crash. You only get into trouble when the tree is badly unbalanced and its depth runs into the tens of thousands; such trees are rare (but useful to keep in mind). – Dominique Jun 15 '20 at 16:51
1

Recursion can be a lot clearer. The only worry with recursion is that you can smash the stack if you recurse too deeply. For example, Linux typically gives you a stack of only a few megabytes (8 MB is a common default), so you don't want to recurse millions of calls deep. Recursing down a balanced binary tree, which is only O(log n) deep, is probably fine.
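
Back-of-the-envelope, assuming something like 50-100 bytes of stack per `Insert` call: a few megabytes of stack leaves room for very roughly 20,000 to 100,000 nested calls, while a balanced tree holding a million nodes is only about 20 levels deep. It is the depth of the recursion, not the total number of calls, that eats the stack.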

In this case it is fairly simple to replace the recursion with a loop like the following, but that is not always the case.

Node* Insert(Node* &rootptr, Node* data) {
    // Walk down the tree, keeping a pointer to the link (the root, a leftptr
    // or a rightptr) that will eventually have to point at the new node.
    Node** p = &rootptr;
    while ((*p) != nullptr) {
        if (data->number <= (*p)->number) {
            p = &(*p)->leftptr;
        }
        else {
            p = &(*p)->rightptr;
        }
    }
    (*p) = data;   // hook the new node into the tree
    return data;
}
gmatht
  • Why `(*p) = data` if we don't need `(*p)` then? Or maybe I got confused with double pointers. – Martian Jun 15 '20 at 16:47
  • `(*p) = data` is where it actually adds the node to the tree. – gmatht Jun 15 '20 at 16:52
  • _"The only worry with recursion is that you can smash the stack if you use it too much."_ Only for recursion that doesn't use tail calls. – cdhowie Jun 15 '20 at 16:53
  • @cdhowie well, tail-recursive functions are the ones that I called simple in the question and they are very easily replaced with loops. – Martian Jun 15 '20 at 16:55
  • @cdhowie But C++ compilers tend not to do tail call optimisation in debug mode. If you don't want your code to suddenly crash in debug mode, perhaps still best not to rely on tail call optimisation to avoid smashing your stack. https://stackoverflow.com/questions/34125/which-if-any-c-compilers-do-tail-recursion-optimization – gmatht Jun 15 '20 at 16:58
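
To make cdhowie's point concrete: a call is a tail call when it is the very last thing the function does, so an optimising compiler may reuse the current stack frame instead of pushing a new one. The `Insert` in the question is not tail recursive (the assignment and the final `return rootptr;` run after the recursive call returns), but a lookup function naturally is. A hypothetical sketch (`Contains` is a made-up name; `Node` is as in the question):

// Tail-recursive lookup: the recursive call is the last action of the
// function, so it can be compiled into a plain jump with no extra stack
// frame. As noted above, debug builds often skip this optimisation, so
// don't rely on it to keep a very deep recursion alive.
bool Contains(const Node* subtree, int key) {
    if (subtree == nullptr) return false;     // ran off the tree: not found
    if (key == subtree->number) return true;  // found it
    return key < subtree->number
               ? Contains(subtree->leftptr, key)    // tail call
               : Contains(subtree->rightptr, key);  // tail call
}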