
Note that I have already read the question

Why not mark everything inline?

and

Can a recursive function be inline?

yet still feel that there is an unresolved edge case of interest here. Assume a language has the following:

  1. pure first-class functions whose parameters are treated as constants (no write-back)
  2. anonymous inline functions are supported (lambda abstraction)
  3. callbacks/continuations/coroutines/fibers as patterns
  4. tail-call optimisation
  5. macro expansion

then why bother allocating activation records at runtime when you could perform an inline expansion of all (mutually) recursive function definitions at compile time? This would seem to reduce the calling overhead to zero. It would also open up opportunities for parallel 'simplification' of the expressions abstracted behind each function, using standard computer-algebra techniques in which variables can remain unknowns during symbolic reduction, including evaluating multiple branches of a conditional speculatively and then throwing away all but the valid result (something made difficult by the von Neumann bottleneck that a stack-based approach imposes).
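To make the idea concrete, here is a minimal sketch (in Python, with hypothetical names; closure composition stands in for compile-time substitution, it is not a real inliner): each level of "unrolling" substitutes the function body into its own recursive call site, so the first `depth` levels need no recursive call of their own, with a fallback beyond that depth.

```python
# Sketch: model "inline expansion of a recursive definition" by
# substituting the body into its recursive call site a fixed number
# of times before falling back to ordinary recursion.

def unroll_factorial(depth):
    # beyond the unrolled depth, fall back to a normal recursive call
    def fallback(n):
        return 1 if n <= 1 else n * fallback(n - 1)

    body = fallback
    for _ in range(depth):
        # each wrap replaces the recursive call with the previous body,
        # mimicking one step of compile-time inline expansion
        body = (lambda inner: lambda n: 1 if n <= 1 else n * inner(n - 1))(body)
    return body

fact3 = unroll_factorial(3)   # first three levels "inlined"
print(fact3(5))               # 120
```

A real compiler would perform the substitution on the syntax tree rather than at runtime, but the shape of the transformation is the same.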

I appreciate that expanding every recursive invocation in place would not be an optimisation, as the resulting code bloat would prevent proper use of the CPU's cache. However, I am only interested in applying this to the features mentioned in 3., which I feel are pragmatic targets as they are of limited depth.
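The limited-depth callbacks/continuations of point 3. can be sketched in continuation-passing style (Python here; all names are illustrative assumptions, not any particular compiler's output): when the continuation is a lambda known at the call site, a compiler could beta-reduce the call into straight-line code with no activation record for the callback.

```python
# CPS version: invokes exactly one of two known continuations.
def parse_digit_cps(ch, ok, err):
    return ok(ord(ch) - ord('0')) if ch.isdigit() else err(ch)

# Call with continuations known at the call site...
result = parse_digit_cps('7', lambda d: d * 10, lambda c: -1)

# ...which beta-reduction could turn into residual straight-line code:
def parse_digit_inlined(ch):
    return (ord(ch) - ord('0')) * 10 if ch.isdigit() else -1

assert result == parse_digit_inlined('7') == 70
```

Because the continuation nesting is statically bounded, the expansion terminates, unlike general recursion.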

  • In the answers to *Can a recursive function be inline?* I did not see a solution that produces a non-recursive output for all possible functions. – Bergi Mar 20 '14 at 18:18
  • @Bergi Thank you for your observation. Derek Park shows an example where a factorial _function definition_ is inlined. However, because it would fill up memory and make poor use of the cache this inlining of the **recursive calls** to factorial is only performed thrice after which it resorts to a normal non-inlined recursive call with better cache coherency at the cost of allocating an activation record on the stack. I took care to make my question centre upon the optimisation of a function definition, not the recursive calls that were within it. I'm also focusing on continuations, etc. in 3. – Uncompetative Mar 21 '14 at 14:56
  • Yes, exactly. So how would you think you could forgo activation records (assuming that tail-call optimisation is not available like with factorials)? – Bergi Mar 21 '14 at 15:00
  • This question seems rather vague. One technique you describe is actually used in some language implementations (look up "abstract interpretation"). But the rest I don't understand. Most of the time, there's no limit to the depth of recursion that can be calculated at compile time. – dfeuer Mar 21 '14 at 19:12
  • @Bergi I'm not saying that one could forgo activation records for all recursive function calls made within their function definitions, but that every definition and a pragmatically determined number of their recursive function calls could be expanded in place at the top level of the code - effectively resulting in one main function which amalgamated the majority of the code together into one optimised lump and a few rarely invoked recursive functions requiring allocation records. Continuations, etc. should incur zero overhead. – Uncompetative Mar 22 '14 at 17:08
  • @dfeuer I have read up on [abstract interpretation](http://www.di.ens.fr/~cousot/AI/IntroAbsInt.html) and found it fascinating, but unrelated. [Symbolic execution](http://en.wikipedia.org/wiki/Symbolic_execution) is one of the formal methods used to check program correctness, but my symbolic variables are more like those in term-graph rewriting systems such as [Mathematica](http://reference.wolfram.com/mathematica/tutorial/EverythingIsAnExpression.html). – Uncompetative Mar 22 '14 at 17:37
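A minimal model of the term-rewriting style of simplification discussed in these comments (illustrative only, and nothing like Mathematica's actual engine): expressions are nested tuples, and rules such as `x * 1 → x` fire even when `x` remains an unknown, so an inlined body can be partially reduced at compile time.

```python
# Sketch: symbolic reduction over expression trees where variables
# stay as unknowns (strings). Each node is (op, left, right).

def simplify(e):
    if not isinstance(e, tuple):
        return e                       # atom: a number or an unknown
    op, a, b = e
    a, b = simplify(a), simplify(b)    # reduce subtrees first
    if op == '*':
        if a == 1: return b            # 1 * x -> x
        if b == 1: return a            # x * 1 -> x
        if a == 0 or b == 0: return 0  # 0 * x -> 0
        if isinstance(a, int) and isinstance(b, int):
            return a * b               # constant folding
    if op == '+':
        if a == 0: return b            # 0 + x -> x
        if b == 0: return a            # x + 0 -> x
        if isinstance(a, int) and isinstance(b, int):
            return a + b
    return (op, a, b)                  # no rule applies: leave residual

# 'n' remains an unknown throughout the reduction
expr = ('*', ('+', 'n', 0), ('*', 1, ('*', 2, 3)))
print(simplify(expr))   # ('*', 'n', 6)
```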

0 Answers