
Full laziness has been repeatedly demonstrated to cause space leaks.
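To illustrate the kind of leak I mean, here is a sketch (my own minimal example, not taken from any particular report):

```haskell
-- `f` is a top-level function, so the list [1 .. 1000000] does not
-- depend on the lambda-bound `x`. Full laziness may therefore float it
-- out to the top level, where it becomes a constant applicative form
-- (CAF): after the first call, the entire million-element list is
-- retained for the lifetime of the program rather than being garbage
-- collected after each call.
f :: Int -> Int
f x = length [1 .. 1000000] + x
```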

Why is full laziness on from `-O` onwards? I find myself unconvinced by the reasoning in SPJ's *The Implementation of Functional Programming Languages*. The claim is that in

f = \y -> y + sqrt 4

sqrt 4 is unnecessarily recomputed each time f is entered, so we should float it outside the lambda. I agree in the small, but since we've seen what problems this transformation causes in the large, I don't believe it is worth it. It seems to me that the benefits of this transformation are obtainable unilaterally** with only local code changes, and programmers who want it should implement it by hand.
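Concretely, the kind of unilateral local change I have in mind is simply (a sketch of the hand transformation, not what GHC itself produces):

```haskell
-- The programmer shares `sqrt 4` across calls by binding it outside
-- the lambda themselves; no cooperation from other modules is needed.
f :: Double -> Double
f = let s = sqrt 4 in \y -> y + s
```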

Can you convince me otherwise? Is full laziness actually really useful? I will be especially convinced if you can provide examples which, to implement by hand, would require multilateral cooperation or non-local transformations.

** unlike optimizations such as inlining and stream fusion, which to implement by hand would require multilateral cooperation between modules and non-local code changes

Tom Ellis
    The `full-laziness` transformation doesn't actually have anything to do with evaluation order. – Tom Ellis Feb 04 '16 at 21:39
  • Another possible point against full laziness is from [this thread](https://mail.haskell.org/pipermail/ghc-devs/2014-August/006195.html). – nh2 Mar 19 '18 at 13:10

1 Answer


There's at least one common case where full laziness is "safe" and an optimization.

g :: Int -> Int
g z = f (z+1)
  where f 0 = 0
        f y = 1 + f (y-1)

This really means `g = \z -> let {f = ...} in f (z+1)` and, compiled that way, will allocate a closure for `f` before calling it. Obviously that's silly, and the compiler should transform the program into

g_f 0 = 0
g_f y = 1 + g_f (y-1)
g z = g_f (z+1)

where no allocation is needed to call g_f. Happily the full laziness transformation does exactly that.

Obviously programmers could refrain from making these local definitions that do not depend on the arguments of the top-level function, but such definitions are generally considered good style...

Another example:

h :: [Int] -> [Int]
h xs = map (+1) xs

In this case you can just eta-reduce, but in general you can't, and giving the function `(+1)` a name by hand is quite ugly.
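For concreteness, here is my rendering of the two hand-written alternatives (a sketch; the name `succInt` is made up for illustration):

```haskell
-- Eta-reduced: fine here, but not possible in general, e.g. when the
-- argument occurs more than once or under another binder.
h1 :: [Int] -> [Int]
h1 = map (+1)

-- Hand-floated: what you would write yourself to get the effect of
-- full laziness without eta reduction. Having to name `(+1)` is the
-- ugliness in question.
succInt :: Int -> Int
succInt = (+1)

h2 :: [Int] -> [Int]
h2 xs = map succInt xs
```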

Reid Barton
  • Thanks for the example. The same would apply if `f` were an `Int`, especially if calculating it is expensive. However, I was thinking of a less violent transformation: `g = let {f = ...} in \z -> f (z+1)`. This would still be considered good style I think, even if it doesn't fit syntactically as nicely with Haskell's `where`. – Tom Ellis Jan 31 '16 at 20:24
  • Your second example, `h`, is more convincing because giving a name to `(+1)` is indeed quite ugly. I would assess it as being equivalent to SPJ's example with `sqrt 4`. I guess my conclusion is then that `full-laziness` saves you from death by a thousand cuts. One could in principle produce optimized versions by hand but it would be inefficient to do so everywhere. – Tom Ellis Feb 01 '16 at 19:43
  • 2
    Note that floating out lambdas should *always* be beneficial. Floating thunks or partial applications might have the ugly side-effects (retaining huge data structures that are cheap to reconstruct) you want to eliminate. It's a shame that GHC misses a flag for that. – Sebastian Graf Jul 10 '18 at 11:56
  • @SebastianGraf I'm not sure what you mean that "floating out lambdas should always be beneficial". Might not a lambda capture thunks in a way that has ugly side effects? Or did you mean something else by "lambda", perhaps "lambda with no free variables"? – Tom Ellis Oct 19 '21 at 09:47
  • I meant that in floating out a lambda, you are floating out a normal-form, something that is cheap to eval. (Contrary to what I wrote above, I think floating out PAPs is OK, too, for the same reason.) If you float out a thunk it might *appear* as if that saves a lot of work when in reality it might be really cheap to recompute if it wasn't bound as a thunk in the first place. And if we enter the thunk at most once anyway, we haven't actually saved any work. – Sebastian Graf Oct 19 '21 at 13:43
  • Of course you are right in that I completely disregarded closure growth in my assessment above. Any floated out binding means that the closure from whence it came will grow. – Sebastian Graf Oct 19 '21 at 13:43