I was looking at the responses to this question by @AndreyT and I had a question regarding the memory efficiency of classical DFS vs. stack-based DFS. The argument is that classical backtracking DFS cannot be created from BFS by a simple queue-to-stack replacement: if you do convert BFS to DFS that way, you lose the space efficiency of classical DFS. Not being a search algorithm expert (though I am reading up on it), I'm going to assume this is just "true" and go with it.
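
To make sure I'm picturing the two variants correctly, here's a rough sketch of what I understand them to be. This is just my own illustration, assuming an adjacency-list graph; the names and types are mine, not from the linked answer:

```cpp
#include <vector>
#include <stack>

// Hypothetical adjacency-list graph: graph[v] lists the neighbours of v.
using Graph = std::vector<std::vector<int>>;

// Classical backtracking DFS: at any moment the call stack holds only the
// current path from the root, one frame per vertex on that path.
void dfs_classical(const Graph& graph, int v, std::vector<bool>& visited) {
    visited[v] = true;
    for (int w : graph[v]) {
        if (!visited[w]) {
            dfs_classical(graph, w, visited);  // descend, then backtrack
        }
    }
}

// "BFS with the queue swapped for a stack": visits vertices in DFS-like
// order, but pushes every unvisited neighbour before descending, so the
// explicit stack can hold far more than just the current path.
void dfs_from_bfs(const Graph& graph, int start, std::vector<bool>& visited) {
    std::stack<int> s;
    s.push(start);
    while (!s.empty()) {
        int v = s.top();
        s.pop();
        if (visited[v]) continue;
        visited[v] = true;
        for (int w : graph[v]) {
            if (!visited[w]) s.push(w);
        }
    }
}
```

My (possibly wrong) understanding is that the first version keeps at most one frame per vertex on the current path, while the second can pile up all the still-unvisited neighbours of everything along that path.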
However, my question is really about overall memory efficiency. While a recursive solution does have a certain code efficiency (I can do a lot more with a few lines of recursive search code) and elegance, doesn't it take a memory (and possibly performance) hit simply because it is recursive?
Every time you recurse into the function, it pushes the local data, the return address, and whatever else the compiler deems necessary to restore state on return onto the stack. That can add up quickly. There is also the cost of a function call per recursion, which eats up some ops as well (maybe minor... or maybe it hurts branch prediction and forces a pipeline flush... not an expert here... feel free to chime in).
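
For comparison, here's roughly what I imagine a hand-rolled equivalent looks like, where each "frame" is just the vertex plus an index into its neighbour list. Again, this is my own sketch, only meant to make the overhead question concrete:

```cpp
#include <vector>
#include <utility>
#include <cstddef>

using Graph = std::vector<std::vector<int>>;

// Explicit-stack version of the classical recursive DFS: each "frame" is a
// (vertex, index of next neighbour to try) pair, i.e. only the state that
// is actually needed to resume the walk after backtracking.
void dfs_explicit(const Graph& graph, int start, std::vector<bool>& visited) {
    std::vector<std::pair<int, std::size_t>> frames;  // our own "call stack"
    visited[start] = true;
    frames.push_back({start, 0});
    while (!frames.empty()) {
        auto& [v, i] = frames.back();
        if (i < graph[v].size()) {
            int w = graph[v][i++];
            if (!visited[w]) {
                visited[w] = true;
                frames.push_back({w, 0});   // "recurse"
            }
        } else {
            frames.pop_back();              // "return" / backtrack
        }
    }
}
```

The recursive version's frames presumably carry all of that plus the return address, saved registers, padding, and so on, which is exactly the overhead I'm wondering about.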
I'd like to stick to simple recursion and not get into "alternative forms" like tail recursion for the answer to this question, at least for now.