When is it better to use lazy evaluation instead of eager evaluation? Is it better when you know that the expression will be computed at most once, or maybe never?
-
This is kinda hard to say in general, at least in any useful way. It depends on the language and the task. – Chuck May 05 '14 at 10:40
-
@Chuck there's this famous article that I link to in my answer, that makes a general argument... – Will Ness May 06 '14 at 15:38
-
@WillNess: I have read Why Functional Programming Matters and just skimmed it again to refresh my memory, and I don't think it contains a good answer to this question. It explains some general benefits of pervasive lazy evaluation, but AFAIK it doesn't explore the practical tradeoffs of lazy versus strict evaluation, or how to evaluate which would be better to use in a given situation (e.g. in Haskell, there are whole classes of problems that are eliminated by switching to strict evaluation). I agree the paper is illuminating and relevant, but it doesn't really answer the question posed here. – Chuck May 06 '14 at 17:27
-
@Chuck I've read this question as to what is better for *a programmer*, not as an *implementational trade-offs* question. – Will Ness May 06 '14 at 21:23
2 Answers
If you have the choice, use lazy evaluation for expressions that may not be evaluated at all or may lead to programming errors under certain circumstances when evaluated.
The classic case, implemented in most languages descending from C, is the short-circuit operators:
if (i != 0 && n/i > 100) ...
Here, n/i > 100 will only be computed when i is not 0, which is nice since it avoids a divide-by-zero error.
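In a lazy language this short-circuit behaviour needs no special operator: a plain function definition suffices, since an argument is only forced if the chosen equation actually uses it. A minimal sketch (the names and' and check are illustrative, not from any library):

```haskell
-- A user-defined short-circuit AND: the second argument is only
-- forced when the first is True, exactly like C's && operator.
and' :: Bool -> Bool -> Bool
and' True  b = b
and' False _ = False

-- The answer's guard, written with the ordinary function and':
check :: Int -> Int -> Bool
check i n = and' (i /= 0) (n `div` i > 100)
-- check 0 5 is False, and the division by zero is never attempted
```

In an eager language, and' would have to be a built-in (or take thunks), because both arguments would be evaluated before the call.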

-
An even more basic example is that you do not need if as a special statement: you can implement it as a standard function if you pass thunks for the "then" and "else" parameters. Then, of course, you need some primitive to select which one is evaluated, which can be solved e.g. by boolean polymorphism (as is the case in Smalltalk). – jJ' May 06 '14 at 10:08
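The comment above can be sketched directly in a lazy language: if becomes an ordinary function, and only the selected branch is ever evaluated (myIf is an illustrative name, not a standard function):

```haskell
-- Under lazy evaluation, `if` is just a two-way selector function:
-- pattern matching on the Bool picks one branch, and the other
-- argument is never forced.
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e

example :: Int
example = myIf (1 < 2) 42 (error "never evaluated")
-- example is 42; the error branch is discarded unevaluated
```

Under eager evaluation the same definition would crash, because both branches would be computed before myIf is even entered.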
Why Functional Programming Matters is the quintessential argument in favor of lazy evaluation, mainly as a facilitator of improved modularity.
I can offer you as an example a lazy formulation for primes by sieve of Eratosthenes,
primes = ((2:) . diff [3..] . bigU . map (\p-> [p*p, p*p+p..])) primes
Here (.) is function composition, (2:) prepends 2, diff is a set difference, bigU finds the union of an (ordered) list of (ordered, increasing) lists of numbers, map is the usual map, etc. Without lazy semantics, all of that mechanics would have to be maintained explicitly, mashed together, instead of using these nice separate modular functions chained together with function composition.
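A runnable sketch of that formulation, under the assumption that diff and bigU work on ordered increasing lists: minus and union below are minimal stand-ins, and bigU is realized as a right fold of pairwise unions, which stays productive only because each composite list starts at an increasing p*p.

```haskell
-- Lazy sieve of Eratosthenes: primes defined in terms of itself.
primes :: [Integer]
primes = 2 : minus [3..]
             (foldr (\(x:xs) r -> x : union xs r) []
                    [[p*p, p*p+p ..] | p <- primes])

-- set difference of two ordered, increasing lists
minus :: Ord a => [a] -> [a] -> [a]
minus (x:xs) (y:ys) = case compare x y of
  LT -> x : minus xs     (y:ys)
  EQ ->     minus xs     ys
  GT ->     minus (x:xs) ys
minus xs _ = xs

-- duplicate-free union of two ordered, increasing lists
union :: Ord a => [a] -> [a] -> [a]
union (x:xs) (y:ys) = case compare x y of
  LT -> x : union xs     (y:ys)
  EQ -> x : union xs     ys
  GT -> y : union (x:xs) ys
union xs ys = xs ++ ys
```

take 10 primes yields [2,3,5,7,11,13,17,19,23,29]. Each function is small and reusable on its own; it is the lazy semantics that lets them be composed over infinite lists.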