
My understanding (which may be incorrect or incomplete) is that lazy evaluation can provide all the advantages of tail recursion, and does one better.

If so, does it mean that tail recursion isn't needed in the context of lazy evaluation?

UPDATE

Specifically, let's look at the following example:

(define (foo a)
  (if (number? a)
      (* a a)
      (lazy-map foo a)))

This function can easily be converted to a tail-recursive one. However, if we do that, we lose the advantage of lazy evaluation.

Actually, does this non-tail-recursive function need to consume a lot of stack space when the input is a very large (or infinite) list? I don't think so. If not, are there good reasons to use tail recursion rather than lazy evaluation?
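To make this concrete, here is a rough flat-list analogue sketched in Haskell, a lazy language (the name `squares` is mine, not from any library):

```haskell
-- Rough Haskell analogue of foo for a flat list: the recursive call
-- sits behind the (:) constructor ("guarded recursion"), so under lazy
-- evaluation each element is produced on demand, even from an infinite
-- input list, although the call is not in tail position.
squares :: [Integer] -> [Integer]
squares []     = []
squares (x:xs) = x * x : squares xs
```

Here `take 5 (squares [1..])` yields `[1,4,9,16,25]` without ever traversing the whole list.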

象嘉道
  • It's not really clear what you're asking. Lazy evaluation is an **evaluation strategy** (which can often be implemented in user-space through the use of lexical closures), whereas tail recursion is just a certain kind of recursive call. Even there, I assume you actually mean **tail-call elimination/tail-call optimization**, which allows tail recursion to be optimized to run in less space. This permits recursive programs not to consume all the stack space. Can you provide any examples where you feel that lazy evaluation would remove the need for tail-call optimization, or vice versa? – Joshua Taylor Jan 08 '16 at 21:10
  • Even if lazy evaluation exists, you may still need something to be calculated eagerly, in which case you'll use tail recursion with strictly calculated accumulating arguments. When you build your result lazily, i.e. with lazy constructors, guarded recursion is the way to go, putting the recursive call behind the lazy data constructor. For a Haskell answer see http://stackoverflow.com/questions/13042353/does-haskell-have-tail-recursive-optimization/13052612#13052612 – Will Ness Jan 08 '16 at 23:25
  • @xando: I edited the question and attempted to extract answerable questions that are as close as I can get to the intent of your original question. Could you please edit this further to make it more useful to your situation? I will then vote to re-open it. – Arthur Ulfeldt Jan 08 '16 at 23:51
  • Tail recursion is just a while loop; it has nothing to do with lazy evaluation. Yet if the language has lazy evaluation, this can be utilized to implement tail-call optimization. For example, https://github.com/Frege/frege does this: instead of performing a dangerous tail call, a thunk can be returned that will perform the call when the value is needed, but then hopefully from a point where less stack space is used. – Ingo Jan 09 '16 at 01:19
  • They are kind of *opposite* things. Lazy evaluation usually only works when there is *non-tail* recursion. Lazy evaluation relies on the creation of a partial result (and recursion on the rest), which is non-tail recursion. Tail recursion would force the recursion to continue and not output a partial result, thereby preventing lazy evaluation. – newacct Jan 09 '16 at 02:25
  • The on-hold vote is kind of puzzling to me. OP is clearly confused about recursion and laziness, but *that's the point* of the question. S/he is asking a legitimate question based on a confusion. The only reason it's "unclear what you're asking" is that OP doesn't understand yet. If s/he understood, there would be no question. The right response is to explain why s/he's confused--to dispel the misunderstanding. Some of the comments do that; to my mind they are proper answers. – Mars Jan 09 '16 at 04:27
  • @Mars: 'Does lazy evaluation do a better job than tail recursion?' - what does that mean? What 'better job'? What is a 'significant role'? What 'advantages' does tail recursion provide? What about the disadvantages? That's all completely vague. No examples. No specifics. It's not even made clear that the concepts are related. Why are numbers needed, given that symbols exist? I would expect a more specific question with more work put into it. Why is Lisp tagged, given that it does not use lazy evaluation? Why not Haskell, given that it does? – Rainer Joswig Jan 09 '16 at 08:33
  • @Mars: 'Why is A needed, given that B exists? Can A provide all the advantages of B? Does A do a better job than B? ...' That's a pattern for a multitude of useless questions, while providing ZERO context and no real programming problem. It's a low-quality question. – Rainer Joswig Jan 09 '16 at 08:44
  • @JoshuaTaylor http://www.cs.indiana.edu/cgi-bin/techreports/TRNNN.cgi?trnum=TR19 provides a (loose) connection, IMO (i.e. how [`LOOP`](http://www.lispworks.com/documentation/lw51/CLHS/Body/m_loop.htm) does list accumulation, in CL). [TRMC](http://en.wikipedia.org/wiki/Tail_recursion#Tail_recursion_modulo_cons) ~~= [guarded-recursion](http://wiki.haskell.org/Tail_recursion) under lazy evaluation. – Will Ness Jan 09 '16 at 21:26

1 Answer

TL;DR: Defining functions "tail-recursively" is not generally useful in lazy languages. You can get a stack overflow even then, in contrast to strict languages with proper tail-call optimization.

Under lazy evaluation of function parameters (call by name), you lose the ability to express a simple constant-space iteration as a recursion.

For example, let's compare two versions of a length function. First, the non-tail-recursive one, under both strict and lazy evaluation:

length [] = 0
length (_:xs) = length xs + 1

Strict evaluation of length [1, 2, 3]:

length [1, 2, 3] ->
  length (1:[2, 3]) = length [2, 3] + 1 ->
    length (2:[3]) + 1 = (length [3] + 1) + 1 ->
      (length (3:[]) + 1) + 1 = ((length [] + 1) + 1) + 1 ->
        ((length [] + 1) + 1) + 1 = ((0 + 1) + 1) + 1 ->
      (1 + 1) + 1 ->
    2 + 1 ->
  3

Lazy evaluation:

length [1, 2, 3] ->
length (1:[2, 3]) = length [2, 3] + 1 ->

At this point, reducing the + is needed, and + has to evaluate both of its arguments; the first is length [2, 3], so evaluation continues as before, but on a list with one fewer element. Stack space is used for evaluating + (assuming a straightforward implementation).

Both versions thus use a stack; in the lazy one, however, it is + rather than length that plays the role of the "recursive" function.

Tail-recursive variant (using an accumulator):

length [] a = a
length (_:xs) a = length xs (a + 1)

Strict evaluation steps utilizing tail-call optimization:

length [1, 2, 3] 0 ->
length (1:[2, 3]) 0 = length [2, 3] (0 + 1) ->
length (2:[3]) 1 = length [3] (1 + 1) ->
length (3:[]) 2 = length [] (2 + 1) ->
length [] 3 = 3

This uses constant stack space and no heap space.

Lazy evaluation:

length [1, 2, 3] 0 ->
length (1:[2, 3]) 0 = length [2, 3] (0 + 1) ->
length (2:[3]) (0 + 1) = length [3] ((0 + 1) + 1) ->
length (3:[]) ((0 + 1) + 1) = length [] (((0 + 1) + 1) + 1) ->
length [] (((0 + 1) + 1) + 1) = ((0 + 1) + 1) + 1 ->
magic -> 3

This uses constant stack space until the "magic" part, but builds up delayed computations (the additions) on the heap (generally). The part marked "magic" is where the sum of all the ones happens, and in a straightforward implementation it uses the stack. (Note that in this and similar cases an optimizing evaluator may actually perform the additions during evaluation and then just return 3; you cannot really tell the difference, but then the stack is not used. It can also use other tricks, such as CPS, to prevent a stack overflow.)
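In practice, a lazy language avoids this thunk build-up by forcing the accumulator at each step, so the additions happen immediately instead of piling up for the "magic" phase. A sketch in Haskell, using GHC's BangPatterns extension (the name `lengthAcc` is mine; GHC's optimizer often inserts this forcing automatically via strictness analysis):

```haskell
{-# LANGUAGE BangPatterns #-}

-- The bang (!a) forces the accumulator on entry, so no chain of
-- (... + 1) thunks builds up: the tail-recursive loop really does run
-- in constant space, matching the strict-evaluation trace above.
lengthAcc :: [b] -> Int -> Int
lengthAcc []     !a = a
lengthAcc (_:xs) !a = lengthAcc xs (a + 1)
```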

Summary:

In a straightforward implementation, lazily evaluated functions end up using a stack regardless of whether they are written tail-recursively or not.

There are, however, various tricks compilers can apply to optimize stack usage. They are not as straightforward as tail-call optimization in strict languages, though.

A better option is to use those languages idiomatically and make use of the variety of higher-order functions, which significantly reduce the need for explicit recursive function definitions (and which exist in lazy languages partly for that reason).
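For instance (a sketch; foldl' is the strict left fold from Data.List, and `lengthVia` is my own name):

```haskell
import Data.List (foldl')

-- length via a strict left fold: foldl' forces the accumulator at each
-- step, giving the constant-space loop without hand-written recursion.
lengthVia :: [a] -> Int
lengthVia = foldl' (\n _ -> n + 1) 0
```

Meanwhile, the lazy combinators compose with infinite structures: `take 3 (map (*2) [1..])` evaluates to `[2,4,6]`.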

jJ'
  • I think that CPS generally consumes heap resources. Is this what you mean: "It can use other tricks to prevent a stack overflow like CPS"? – 象嘉道 Jan 11 '16 at 19:16
  • Yes, it trades stack space for heap space by having all calls in a tail position and passing the continuation as a parameter. But this remark was not really important for answering the question, only meant to hint that there are always ways to prevent stack overflow, but in this case probably quite expensive. – jJ' Jan 13 '16 at 03:52