A recursive algorithm is an algorithm implemented according to the Divide & Conquer strategy, where solving each intermediate sub-problem produces 0, 1, or more new, smaller sub-problems. If these sub-problems are solved in LIFO order, you get a classic recursive algorithm.
Now, if your algorithm is known to produce only 0 or 1 sub-problem at each step, then it can easily be implemented through tail recursion. In fact, such an algorithm can just as easily be rewritten as an iterative algorithm and implemented with a simple loop. (Needless to add, tail recursion is just another, less explicit way to implement iteration.)
A schoolbook example of such a recursive algorithm is the recursive approach to factorial calculation: to calculate n! you first need to calculate (n-1)!, i.e. at each recursive step you discover only one smaller sub-problem. This property is what makes it so easy to turn the factorial computation into a truly iterative algorithm (or a tail-recursive one), as sketched below.
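A minimal sketch of that transformation (Python here, purely for illustration; the function names are mine). Note that CPython does not eliminate tail calls, so the tail-recursive version is shown only to make the rewrite explicit:

```python
# Plain recursion: the pending multiplication keeps one stack frame alive per call.
def factorial_recursive(n):
    if n <= 1:
        return 1
    return n * factorial_recursive(n - 1)

# Tail-recursive form: all state travels in the accumulator, so nothing is
# "pending" after the recursive call returns.
def factorial_tail(n, acc=1):
    if n <= 1:
        return acc
    return factorial_tail(n - 1, acc * n)

# The same computation as a simple loop -- the tail-recursive version made mechanical.
def factorial_iterative(n):
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc
```

The accumulator in the tail-recursive version plays exactly the role of the loop variable in the iterative one, which is why the two are interchangeable.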
However, if you know that in the general case the number of sub-problems generated at each step of your algorithm is more than 1, then your algorithm is substantially recursive. It cannot be rewritten as an iterative algorithm, and it cannot be implemented through tail recursion. Any attempt to implement such an algorithm in an iterative or tail-recursive fashion will require additional LIFO storage of non-constant size for the "pending" sub-problems. Such an implementation would merely obfuscate the unavoidably recursive nature of the algorithm by implementing the recursion manually.
For example, such a simple problem as traversal of a binary tree with parent->child links (and no child->parent links) is a substantially recursive problem. It cannot be done by a tail-recursive algorithm, and any "iterative" implementation has to carry an explicit LIFO stack of pending nodes, i.e. it ends up implementing the recursion by hand, as the sketch below shows.
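Here is a hypothetical sketch (Python again; the `Node` class and the function name are mine) of a pre-order traversal written "iteratively". The explicit stack of pending nodes is the manual recursion described above, and its size grows with the depth of the tree rather than staying constant:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    value: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def preorder(root: Optional[Node], visit: Callable[[int], None]) -> None:
    # The list used as a LIFO stack here is exactly the call stack of the
    # recursive version, managed by hand.
    stack = [root] if root is not None else []
    while stack:
        node = stack.pop()
        visit(node.value)
        # Push the right child first so the left subtree is visited first (LIFO order).
        if node.right is not None:
            stack.append(node.right)
        if node.left is not None:
            stack.append(node.left)
```

The loop looks iterative, but the non-constant stack gives the game away: the recursion has not been removed, only hidden.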