
I'm new to Haskell, and I'm writing a paper on it for my Programming Languages class. I want to demonstrate Haskell's laziness with some sample code, but I'm not sure whether what I'm seeing is actually laziness.

doubleMe xs = [x*2 | x <- xs]

In ghci:

let xs = [1..10]
import Debug.Trace
trace (show xs) doubleMe (trace (show xs) doubleMe (trace (show xs) doubleMe xs))

Output:

[1,2,3,4,5,6,7,8,9,10]
[1,2,3,4,5,6,7,8,9,10]
[1,2,3,4,5,6,7,8,9,10]
[8,16,24,32,40,48,56,64,72,80]

Thanks for your time and help!

Nathan Jones
    If you want to show laziness, why not just use an infinite list, then show computations on it (taking first `n` values, searching for a value that passes a predicate, etc.)? The two classic examples are primes and Fibonacci sequence. – Lily Ballard Dec 10 '12 at 20:43
  • Thanks for the suggestions. I'll look into those if I don't get an answer on the code I have now. – Nathan Jones Dec 10 '12 at 20:50
  • Why do you think your code sample demonstrates laziness? I think you may be confusing laziness with immutability. – dave4420 Dec 10 '12 at 20:55
  • You could be also interested in [Why is lazy evaluation useful?](http://stackoverflow.com/q/265392/1333025). One answer gives a [nice example](http://stackoverflow.com/a/284180/1333025): `head . quickSort` has only _O(n)_ complexity, because the rest of the sorted list is not computed. – Petr Dec 10 '12 at 21:37

4 Answers


Your usage of trace here isn't particularly insightful, or in fact insightful at all. All it does is print the same list at three different points in the evaluation, which tells you nothing about the actual state of the program. What actually happens is that each trace is forced before the corresponding doubling step even starts (namely, when the result list is demanded to weak head normal form), which is pretty much what you would get in a language with fully strict evaluation.

To see some laziness, you could do something like

Prelude Debug.Trace> let doubleLsTracing xs = [trace ("{Now doubling " ++ show x ++ "}") $ x*2 | x <- xs]
Prelude Debug.Trace> take 5 $ doubleLsTracing [1 .. 10]
{Now doubling 1}
{Now doubling 2}
{Now doubling 3}
{Now doubling 4}
{Now doubling 5}
[2,4,6,8,10]

Here you can see that only five numbers are doubled, because only five results were requested, even though the list given to doubleLsTracing has ten entries.

Note that trace is generally not a great tool for monitoring the "flow of execution"; it's merely a hack for "looking into" local variables to see what's going on in some function.
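As a minimal sketch of that more typical use (stepSquare is a made-up example function):

import Debug.Trace (trace)

-- Peek at a local intermediate value: the message is printed to
-- stderr as a side effect the first time the result is forced.
stepSquare :: Int -> Int
stepSquare n = trace ("stepped = " ++ show stepped) (stepped * stepped)
  where
    stepped = n + 1

Calling stepSquare 3 prints "stepped = 4" and returns 16.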

leftaroundabout

Infinite streams are always a good example. In most languages you need special constructs (generators, iterators, and the like) to work with them, but in Haskell they are perfectly ordinary values.

One example is the Fibonacci stream:

fib = 0 : 1 : zipWith (+) fib (tail fib)

take 10 fib => [0,1,1,2,3,5,8,13,21,34]

Another one is obtaining the stream of prime numbers by using the trial division method:

primes = sieve [2..]
    where sieve (x:xs) = x : filter (not . (== 0) . (`mod` x)) (sieve xs)

take 10 primes => [2,3,5,7,11,13,17,19,23,29]

Also, implementing backtracking in Haskell is very simple, giving you the ability to obtain the list of solutions lazily, on demand:

http://rosettacode.org/wiki/N-queens_problem#Haskell
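In case you want an inline illustration without following the link, here is a minimal sketch (not the Rosetta Code solution) where take 1 forces only as much backtracking as one solution requires:

-- Naive N-queens: queens n lists all ways to place n queens row by
-- row (one column index per row) so that none attack each other.
queens :: Int -> [[Int]]
queens n = go n
  where
    go 0 = [[]]
    go k = [col : cols | cols <- go (k - 1), col <- [1 .. n], safe col cols]
    -- A new queen is safe if it shares no column or diagonal with the
    -- queens already placed (at row distances 1, 2, ...).
    safe col cols =
      and [col /= c && abs (col - c) /= d | (d, c) <- zip [1 ..] cols]

take 1 (queens 8) returns a single solution without enumerating all 92, because the rest of the search tree is never demanded.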

A more complex example, which shows how laziness lets you implement minimum efficiently, is the one found here:

Lazy Evaluation and Time Complexity

It basically shows how Haskell's laziness yields a very elegant definition of the minimum function (which finds the smallest element of a list):

minimum = head . sort
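For instance, with a lazy quicksort along these lines (called lsort here so it doesn't clash with Data.List.sort), head . lsort only ever forces the left-most partitions, which is what makes finding the minimum O(n) on average:

-- Lazy quicksort sketch: the right-hand partition at each step is
-- never evaluated when only the head of the result is demanded.
lsort :: Ord a => [a] -> [a]
lsort []       = []
lsort (p : xs) =
  lsort [x | x <- xs, x < p] ++ p : lsort [x | x <- xs, x >= p]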

You could demonstrate Haskell's laziness with contrived examples, but I think it is far better to show how laziness helps you develop solutions to common problems that are far more modular than their counterparts in other languages.

Marius Danila

The short answer is, "no". leftaroundabout explains that pretty well in his answer.

My suggestion is to:

  1. Read and understand the definition of lazy evaluation.
  2. Write a function where one of the arguments can diverge, an example that cannot work in your favorite strict (non-lazy) language (C, Python, Java). For example, sumIfFirstArgIsNonZero(x, y), which returns x+y if x != 0 and 0 otherwise (see the sketch after this list).
  3. For bonus points, define your own function ifThenElse that doesn't use Haskell's built-in if-then-else syntax, and explain why writing new control-flow structures in lazy languages is easy.
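A minimal Haskell sketch of both exercises might look like this (the names are just the ones suggested above):

-- Returns x + y when x is non-zero, and 0 otherwise. Thanks to
-- laziness, y is never evaluated in the zero case, so a diverging
-- second argument is harmless there.
sumIfFirstArgIsNonZero :: Integer -> Integer -> Integer
sumIfFirstArgIsNonZero 0 _ = 0
sumIfFirstArgIsNonZero x y = x + y

-- A user-defined if-then-else: just an ordinary function, no special
-- syntax. Laziness means only the chosen branch is ever evaluated.
ifThenElse :: Bool -> a -> a -> a
ifThenElse True  t _ = t
ifThenElse False _ e = e

main :: IO ()
main = do
    print (sumIfFirstArgIsNonZero 0 (error "never forced"))          -- 0
    print (ifThenElse (1 < 2) "then branch" (error "never forced"))  -- "then branch"

In a strict language, both arguments would be evaluated before the call, and the error would fire.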

That should be easier than trying to wrap your head around infinite data streams or tying-the-knot tricks.

Itai Zukerman
  • Most strict languages have non-strict if-then-else, so I'm not sure what the second point achieves. – Ben Millwood Dec 10 '12 at 21:17
  • Maybe returning `x*y` would make more sense, since then the behaviour at `x = 0` is consistent? – Ben Millwood Dec 10 '12 at 21:20
  • A slightly less artificial example could be: Define values that will diverge on some inputs using `let` or `where`, but only use those values after checking the input. This both avoids errors and avoids unnecessary computation, without nesting things in huge towers of conditional blocks. – C. A. McCann Dec 10 '12 at 21:41

The main point of laziness is that values that aren't needed aren't computed, so to demonstrate this, you'll have to show things not being evaluated. Your example isn't the best demonstration of laziness, since all of its values are eventually computed.

Here's a small example, as a starting point:

someValueThatNeverTerminates = undefined -- stands in for, e.g., a divide-by-zero error

main = do let (greeting, _) = ("Hello terminating world!", someValueThatNeverTerminates)
          putStrLn greeting

This says hello immediately; if Haskell weren't lazy, constructing the pair would force someValueThatNeverTerminates and the whole thing would break before printing anything.
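For contrast, you can check in GHCi that the hidden value really does blow up as soon as you demand it (a sketched session; the exact exception text depends on your GHC version):

Prelude> let (greeting, boom) = ("Hello terminating world!", undefined :: Integer)
Prelude> putStrLn greeting
Hello terminating world!
Prelude> print boom
*** Exception: Prelude.undefined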

amindfv