One of the amazing things about Haskell is the Spineless Tagless G-machine (STG). According to SPJ, it was originally designed to enable the laziness that made functional programming workable on the limited hardware of the time. It has since been shown to have many additional benefits for functional programming.
When we look at languages like Scheme, recursion features prominently (as it does in Haskell). Part of what makes this particularly effective is the feature known as Tail Call Elimination. The reason you want Tail Call Elimination is that if a function calls itself thousands of times, it can blow the stack, even stackoverflow. [sic]
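To make the contrast concrete, here is a minimal sketch (the function names are mine, not from any particular library) of a recursive sum written with a strict accumulator, so that each recursive call is in tail position and the loop runs in constant stack space:

```haskell
{-# LANGUAGE BangPatterns #-}

-- sumTo n computes n + (n-1) + ... + 1 with an accumulator.
-- The recursive call to go is the last thing done, so there is
-- nothing pending on the stack between iterations; the bang
-- pattern keeps the accumulator evaluated so thunks don't pile
-- up instead.
sumTo :: Int -> Int
sumTo n = go 0 n
  where
    go :: Int -> Int -> Int
    go !acc 0 = acc
    go !acc k = go (acc + k) (k - 1)  -- tail call

main :: IO ()
main = print (sumTo 1000000)
```

Without the strictness annotation the accumulator would build a chain of suspended additions, which is the lazy-language analogue of the stack problem, just moved to the heap.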
Now, Haskell has some interesting options for Tail Call Elimination. What we see is that:
Haskell's recursive functions aren't evaluated in a very recursive way! The only calls hanging around on the stack are the multiplications themselves. If (*) is viewed as a strict data constructor, this is what's known as guarded recursion (although it is usually referred to as such with non-strict data constructors, where what's left in its wake are the data constructors themselves, forced only by further access).
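A sketch of the two shapes being contrasted above (function names are my own, for illustration): `fact` leaves pending multiplications behind each call, while `countUp` is guarded recursion in the usual sense, with the recursive call tucked under the non-strict (:) constructor:

```haskell
-- Not tail recursive: after fact (n - 1) returns, a multiplication
-- is still pending, so the (*) applications accumulate; they are
-- the "stack of calls hanging around".
fact :: Integer -> Integer
fact 0 = 1
fact n = n * fact (n - 1)

-- Guarded recursion: the recursive call sits under the lazy (:)
-- constructor, so each step yields one cons cell and suspends the
-- rest as a thunk, forced only when a consumer demands more.
countUp :: Integer -> [Integer]
countUp n = n : countUp (n + 1)

main :: IO ()
main = do
  print (fact 5)
  print (take 3 (countUp 1))
```

Because `countUp` produces a constructor before recursing, a consumer like `take` can stop the recursion after as many cells as it needs; no deep call chain is ever built.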
My question is: are Haskell's options for Tail Call Elimination a consequence of the Spineless Tagless G-machine?