
I'm currently digesting the nice presentation "Why learn Haskell?" by Keegan McAllister. There he uses the snippet

minimum = head . sort

as an illustration of Haskell's lazy evaluation by stating that minimum has time complexity O(n) in Haskell. However, I think the example is kind of academic in nature. I'm therefore asking for a more practical example where it's not trivially apparent that most of the intermediate calculations are thrown away.
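For reference, here is a minimal runnable version of the talk's snippet (the name minimum' is ours, to avoid shadowing the Prelude's minimum). Because the sort is lazy, forcing only the head of its result does far less work than a full sort:

import Data.List (sort)

-- Lazy evaluation means forcing only the head of the sorted list
-- does not pay for sorting the whole list.
minimum' :: Ord a => [a] -> a
minimum' = head . sort

main :: IO ()
main = print (minimum' [100000, 99999 .. 1])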

Nordlöw
    There's a difference between the example being academic and the benefit of lazy evaluation being nonobvious. Why would you want the latter? Isn't it enough to see that real-world code can benefit from lazy evaluation? –  Oct 23 '11 at 19:33
  • "All problems in computer science can be solved by another level of indirection". Laziness is also a kind of indirection. – Will Ness Jul 05 '16 at 09:42
  • https://www.reddit.com/r/haskell/comments/5xge0v/today_i_used_laziness_for/?limit=500 has a bunch of nice examples – unhammer Mar 22 '17 at 07:50

3 Answers

  • Have you ever written an AI? Isn't it annoying that you have to thread pruning information (e.g. maximum depth, the minimum cost of an adjacent branch, or other such information) through the tree traversal function? This means you have to write a new tree traversal every time you want to improve your AI. That's dumb. With lazy evaluation, this is no longer a problem: write your tree traversal function once, to produce a huge (maybe even infinite!) game tree, and let your consumer decide how much of it to consume. (See the game-tree sketch after this list.)

  • Writing a GUI that shows lots of information? Want it to run fast anyway? In other languages, you might have to write code that renders only the visible scenes. In Haskell, you can write code that renders the whole scene, and then later choose which pixels to observe. Similarly, rendering a complicated scene? Why not compute an infinite sequence of scenes at various detail levels, and pick the most appropriate one as the program runs?

  • You write an expensive function, and decide to memoize it for speed. In other languages, this requires building a data structure that tracks which inputs for the function you know the answer to, and updating the structure as you see new inputs. Remember to make it thread safe -- if we really need speed, we need parallelism, too! In Haskell, you build an infinite data structure, with an entry for each possible input, and evaluate the parts of the data structure that correspond to the inputs you care about. Thread safety comes for free with purity. (A minimal sketch follows the summary paragraph below.)

  • Here's one that's perhaps a bit more prosaic than the previous ones. Have you ever found a time when && and || weren't the only things you wanted to be short-circuiting? I sure have! For example, I love the <|> function for combining Maybe values: it takes the first one of its arguments that actually has a value. So Just 3 <|> Nothing = Just 3; Nothing <|> Just 7 = Just 7; and Nothing <|> Nothing = Nothing. Moreover, it's short-circuiting: if it turns out that its first argument is a Just, it won't bother doing the computation required to figure out what its second argument is.

    And <|> isn't built into the language; it's tacked on by a library. That is: laziness allows you to write brand new short-circuiting forms. (Indeed, in Haskell, even the short-circuiting behavior of (&&) and (||) isn't built-in compiler magic: it arises naturally from the semantics of the language plus their definitions in the standard libraries.)
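To make the first bullet concrete, here is a minimal game-tree sketch (gameTree, prune, and the toy move function are illustrative names, not from any particular library):

import Data.Tree (Tree (..), flatten)

-- Produce the (possibly infinite) game tree once, with no pruning
-- logic mixed in; 'moves' enumerates the successors of a position.
gameTree :: (pos -> [pos]) -> pos -> Tree pos
gameTree moves p = Node p (map (gameTree moves) (moves p))

-- One possible consumer: a depth cutoff. Other consumers (alpha-beta
-- pruning, iterative deepening, ...) can reuse the same 'gameTree'.
prune :: Int -> Tree a -> Tree a
prune 0 (Node x _)  = Node x []
prune d (Node x ts) = Node x (map (prune (d - 1)) ts)

main :: IO ()
main = print (flatten (prune 3 (gameTree (\n -> [n + 1, n * 2]) (1 :: Integer))))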

In general, the common theme here is that you can separate the production of values from the determination of which values are interesting to look at. This makes things more composable, because the choice of what is interesting to look at need not be known by the producer.
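To make the memoization bullet concrete: in Haskell the "infinite data structure" can be as simple as a lazy list. A minimal sketch, using the well-known Fibonacci example (the names fibs and fib are ours):

-- 'fibs' is an infinite table whose entries are thunks, each
-- computed at most once, on demand, and shared thereafter.
fibs :: [Integer]
fibs = map fib [0 ..]

fib :: Int -> Integer
fib 0 = 0
fib 1 = 1
fib n = fibs !! (n - 1) + fibs !! (n - 2)

main :: IO ()
main = print (fibs !! 100)  -- fast, despite the naive-looking recursion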

Daniel Wagner
  • Wow, convincing arguments---a promising future lies ahead for Haskell, IMHO. So the `<|>` operator is equivalent to (Emacs-)Lisp's `or` behaviour on objects which may always be `nil`, right? Anyway I really miss this behaviour in other languages. – Nordlöw Oct 23 '11 at 20:26
  • What do you mean by `thread pruning information`? Is it some kind of book-keeping of a specific tree's branch performance stats? – Nordlöw Oct 23 '11 at 22:47
    @Nordlöw Yes, exactly. For example, when you first implement an AI and just want to bang something out, you might decide to do a brute-force n-ply search. But the function you write to do the n-ply search must do two tasks: one is searching the game tree, and the other is keeping track of how many ply you have already done. Later, when you decide to do alpha-beta pruning instead of brute force, since the concerns are intertwined in one function, you have your hands tied a bit. Laziness allows you to separate the concerns of searching the game tree and pruning the search tree. – Daniel Wagner Oct 24 '11 at 17:13
    Of course, memoisation only gets simpler when it is indeed feasible to store _all_ possible outcomes. If you can only afford to memoise, say, the _n_ most-evaluated results (but don't know at compile-time which ones these will be), Haskell's purity actually makes it rather harder to do. – leftaroundabout Sep 07 '12 at 09:15

Here's a well-known example I posted to another thread yesterday. Hamming numbers are numbers that don't have any prime factors larger than 5. That is, they have the form 2^i * 3^j * 5^k. The first 20 of them are:

[1,2,3,4,5,6,8,9,10,12,15,16,18,20,24,25,27,30,32,36]

The 500000th one is:

1962938367679548095642112423564462631020433036610484123229980468750

The program that printed the 500000th one (after a brief moment of computation) is:

-- Merge two ascending streams, dropping duplicates. The patterns cover
-- only non-empty lists, which is fine here: every stream involved is infinite.
merge xxs@(x:xs) yys@(y:ys) =
  case x `compare` y of
    LT -> x : merge xs yys
    EQ -> x : merge xs ys
    GT -> y : merge xxs ys

-- The Hamming numbers, lazily defined in terms of themselves:
-- 'm k' is the whole sequence scaled by k.
hamming = 1 : m 2 `merge` m 3 `merge` m 5
  where
    m k = map (k *) hamming

main = print (hamming !! 499999)

Computing that number with reasonable speed in a non-lazy language takes quite a bit more code and head-scratching. There are a lot of examples here.
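A quick sanity check in GHCi reproduces the first 20 values listed above:

ghci> take 20 hamming
[1,2,3,4,5,6,8,9,10,12,15,16,18,20,24,25,27,30,32,36]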

solrize

Consider generating and consuming the first n elements of an infinite sequence. Without lazy evaluation, the naive encoding would run forever in the generation step, and never consume anything. With lazy evaluation, only as many elements are generated as the code tries to consume.
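A minimal sketch of that pattern (naturals and the demo main are illustrative names):

-- 'naturals' is conceptually infinite; 'take' forces only the
-- first ten elements, so the program terminates.
naturals :: [Integer]
naturals = [0 ..]

main :: IO ()
main = print (take 10 naturals)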

Phil Miller