
I'm looking for more pros and cons of Haskell's lazy evaluation.

Pro

  • possibility of infinite data structures (e.g. take 5 [1..], or the classic fibs list)
  • higher performance: no more work is done than necessary (e.g. head (map (2 *) [1 .. 10]) computes only the first element; see the snippet after this list)
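
A minimal sketch of those two examples; both finish immediately, because only the demanded parts of the lists are ever built:

    main :: IO ()
    main = do
      print (take 5 [1 ..])               -- [1,2,3,4,5]: only five elements of the infinite list are built
      print (head (map (2 *) [1 .. 10]))  -- 2: only the first multiplication is performed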

Contra

  • The programmer has to be more advanced, because there is no strict evaluation order
  • Debugging will also be more difficult
  • Predicting the required amount of memory and the runtime will also be harder

Kind regards

Dieter
  • Pro: less memory usage, lists don't have to be stored in their entirety – Ben Ruijl Dec 27 '12 at 16:57
  • The performance argument isn't that clear-cut: While lazy evaluation can lead to asymptotic improvements of performance, it usually has worse constant factors. That is, if for a given algorithm, lazy evaluation does not lead to asymptotic improvements, its performance will often be worse because of the book-keeping costs that laziness entails. – sepp2k Dec 27 '12 at 16:57
  • @BenRuijl Lists don't **always** have to be stored. Often they do though (albeit maybe not in their entirety). And, like in my previous comment, it has to be mentioned that in cases where laziness can't cut down on the number of elements that have to be stored, memory consumption will actually be worse (by constant factors) because of all the thunks that have to be created. – sepp2k Dec 27 '12 at 17:00
  • I find debugging easier in Haskell than any other language, and I make fewer programming errors too, so I disagree that debugging is harder. (I'd put the strong, advanced, flexible and expressive type system as one of Haskell's advantages, but you could easily point out that that's not about lazy evaluation.) – AndrewC Dec 27 '12 at 20:01

1 Answer


First of all, lazy evaluation wasn't invented in Haskell, so it's incorrect to attribute it to the language alone.

Second, Haskell also offers ways to force eager evaluation (the opposite: evaluate an expression right away instead of waiting until its value is demanded).
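
For instance (a minimal sketch; sumStrict is a made-up name), bang patterns, seq and ($!) let you demand a value eagerly:

    {-# LANGUAGE BangPatterns #-}

    -- The bang pattern forces the accumulator at every step, so no chain of
    -- unevaluated thunks builds up while the list is consumed.
    sumStrict :: [Int] -> Int
    sumStrict = go 0
      where
        go !acc []       = acc
        go !acc (x : xs) = go (acc + x) xs

    main :: IO ()
    main = print (sumStrict [1 .. 1000000])  -- seq and ($!) give similar control without the extension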

Third, lazy-evaluation facilities are readily available in other languages and technologies too: Python's generators (recall the xrange function), the Meyers singleton and template instantiation in C++, and delayed symbol resolution in runtime linkers are all examples of this idea.

So, in any case, absorbing this idea and the corresponding vocabulary will never harm a software engineer.

As to the pros and cons, you named the primary ones. A few more can be added (remember, you can do these in virtually any language that has data structures and function calls):

  • Recursive data structures, where you can create, for example, a list value with its elements arranged in a circle, the head being the next element of the "last" one; traversing such a list yields an infinitely repeating sequence of elements. Probably not the most motivating example, but you can do the same with trees, graphs and so on (a sketch follows this list).

  • Arranging control flow with lazy data structures instead of built-in primitives; think of building coroutines out of nothing but lazy lists (also sketched after the list). This is actually the other side of the coin of the more complex and convoluted evaluation order (i.e. one of your contras turned into an advantage).

  • Semi-automatic parallelisation of computations (see the par sketch after the list). This is more an advantage of referential transparency than of lazy evaluation; but still, the two features fuse together very organically.

  • Performance-wise, memoization often comes to mind when musing on lazy evaluation (the last sketch below shows the classic fibs idiom); although doing it automatically is a hard (probably still unsolved) problem with lots of details and pitfalls.
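
A minimal sketch of such a cyclic list (ring is just an illustrative name):

    -- 'ring' loops back to its own head ("tying the knot"); traversal repeats
    -- 1,2,3 forever, so only a finite prefix may ever be demanded.
    ring :: [Int]
    ring = 1 : 2 : 3 : ring

    main :: IO ()
    main = print (take 7 ring)  -- [1,2,3,1,2,3,1]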
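
A sketch of coroutine-like control flow built from mutually dependent lazy lists (client, server, requests and responses are hypothetical names; the doubling protocol is just for illustration):

    -- The two lists are defined in terms of each other, yet the program runs:
    -- each element is produced only when the other side demands it.
    server :: [Int] -> [Int]
    server = map (* 2)                      -- answer every request by doubling it

    client :: Int -> [Int] -> [Int]
    client initial resps = initial : resps  -- feed each response back as the next request

    requests, responses :: [Int]
    requests  = client 1 responses
    responses = server requests

    main :: IO ()
    main = print (take 5 responses)  -- [2,4,8,16,32]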
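
A sketch of the semi-explicit parallelism point, assuming the parallel package (Control.Parallel) is available; parSum is a made-up name:

    import Control.Parallel (par, pseq)

    -- 'par' sparks the evaluation of one half while the other is forced;
    -- purity guarantees the result equals the plain sequential sum.
    parSum :: [Int] -> Int
    parSum xs = left `par` (right `pseq` (left + right))
      where
        (ys, zs) = splitAt (length xs `div` 2) xs
        left     = sum ys
        right    = sum zs

    main :: IO ()
    main = print (parSum [1 .. 1000000])  -- compile with -threaded to actually use several cores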
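
And a sketch of memoization falling out of laziness, using the well-known fibs idiom (not an automatic mechanism, just sharing of a lazily built list):

    -- 'fibs' is built once and shared, so each Fibonacci number is computed
    -- only a single time no matter how often 'fib' is called.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    fib :: Int -> Integer
    fib n = fibs !! n

    main :: IO ()
    main = print (fib 50)  -- 12586269025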

So, basically, if you look deeper, every aspect comes with both possibilities and trade-offs; your task as a software engineer is to know them and to choose wisely based on the details of the concrete problem.

ulidtko