19

I'm looking for a Haskell compiler that uses strict evaluation by default instead of lazy evaluation. I would just use OCaml, but Haskell's syntax is so much better than OCaml's (and Haskell is pure, and has cool features such as type classes).

I'd really rather not constantly put !s and $!s all over my program. A compiler with a switch or a preprocessor to put in the strictness annotations would be really nice. It would also be helpful if there was a way to use lazy evaluation in certain places too, just in case I want something like an infinite list (I probably never will).
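(To make the annotations mentioned above concrete, here is a minimal sketch; `Pair` and `sumPair` are illustrative names, not from any library. A `!` on a constructor field forces that field when the value is built, and `$!` forces an argument before the application, both to weak head normal form.)

```haskell
module Main where

-- A ! on a field means the Int is evaluated when the pair is constructed,
-- so no thunk can hide inside it.
data Pair = Pair !Int !Int

sumPair :: Pair -> Int
sumPair (Pair a b) = a + b

main :: IO ()
main = do
  let x = 2 + 3
  -- f $! e evaluates e to weak head normal form before applying f.
  print $! sumPair (Pair x 10)
```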

Please do not try to convince me that lazy evaluation is better, I really need the performance. IIRC, Simon Peyton Jones even said that lazy evaluation wasn't really necessary, it was there mostly to prevent them from making the language impure.

Zifre
  • If such a preprocessor exists (which I don't know), it would probably mean you'd have to recompile every library you use, since those are all lazy (and are written to work in a lazy environment). I'd guess most of the Haskell libraries would break if suddenly used with strict evaluation. – Tom Lokhorst Jun 15 '09 at 13:26
  • @Tom Lokhorst: Certainly some things would break, but I would expect most things to work correctly unmodified. – Zifre Jun 15 '09 at 13:28
  • Do you have an example where lazy evaluation causes really bad performance? – ShreevatsaR Jun 15 '09 at 13:53
  • I certainly do. Something to do with only 6 GB of memory in my machine and a space leak. If you're willing to sign an NDA, I'd be extremely happy to let you help me with this next time it happens. – cjs Jun 15 '09 at 18:49
  • I doubt I'd be able to help, NDA or not. I was hoping to learn something from your example, but a solely existential statement is not very enlightening. :-) – ShreevatsaR Jun 16 '09 at 00:47
  • @ShreevatsaR: See this Stack Overflow question for a really simple example of where this matters: (http://stackoverflow.com/questions/412919/how-does-haskell-tail-recursion-work). It's not always obvious what lazy evaluation is doing behind your back. This is even tail-recursive, and it causes a stack overflow! – Zifre Jun 16 '09 at 13:39

11 Answers

17

I'd really rather not constantly put !s and $!s all over my program

You're doing it wrong, if that's how you're programming Haskell :) You simply won't need to do this. Use GHC, use -O2, use strict data types when appropriate, use lazy ones when appropriate. Don't assume laziness is going to be a problem - it is a solution to a lot of problems.
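As a sketch of the "strict data types when appropriate" advice (the `Stats` type and names here are illustrative, not from any library): a small record with strict fields, folded over with `foldl'`, needs no `$!` at the call sites.

```haskell
module Main where

import Data.List (foldl')

-- Strict fields mean the counters are evaluated as each element arrives,
-- so the fold runs in constant space without any $! scattered around.
data Stats = Stats !Int !Int   -- count, running total
  deriving (Eq, Show)

step :: Stats -> Int -> Stats
step (Stats c t) x = Stats (c + 1) (t + x)

mean :: [Int] -> Double
mean xs = fromIntegral t / fromIntegral c
  where Stats c t = foldl' step (Stats 0 0) xs

main :: IO ()
main = print (mean [1 .. 1000000])
```

With -O2, GHC will often unbox the strict `Int` fields entirely, so the loop runs on raw machine integers.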

Don Stewart
16

If you have a Haskell compiler that uses strict evaluation, it doesn't compile Haskell. Non-strictness is part of the Haskell spec!

However, there are alternatives.

  • DDC is an attempt to create a variant of Haskell in which laziness is explicit, and which supports things like destructive update whilst retaining all the rest of Haskell's goodness. There is one problem: the compiler is currently only at the alpha stage, although it seems to be at least usable.

  • Create a preprocessor, as others have done.

  • Learn to use Haskell “the right way”. If you can simplify your test case down to something which is publicly-displayable, you could post it on the Haskell-Café mailing list, where people are very helpful with these sorts of questions concerning the effects of non-strictness.

porges
  • DDC looks really nice. Now I have to master the effect system instead of monads... – Zifre Jun 16 '09 at 13:11
  • To be precise, non-strictness is a part of the Haskell spec, not laziness. Laziness is simply the evaluation order chosen by GHC. – Jake McArthur Apr 26 '11 at 04:09
12

There have been two attempts at strictly evaluating Haskell in the past:

Both focused on keeping Haskell's non-strict semantics while using a mostly-strict evaluation strategy, rather than actually changing the semantics, and neither ever really saw the light of day.

Edit: Martijn's suggestion of strict-plugin looks ideal for your purposes, as it actually does what you want and the author is still active in the Haskell community; I'd forgotten about it.

phunehehe
Ganesh Sittampalam
8

See also ghc-strict-plugin, an example for GHC's plugin framework, described in the Monad Reader 12.

Martijn
7

I feel your pain. My biggest PITA in my day-to-day programming is dealing with those !@#$%^&( space leaks.

However, if it helps, with time you do learn (the hard way) about how to deal with this, and it does get better. But I'm still waiting for Andy Gill to come out with his magical space leak profiler to fix all of my problems. (I'm taking his off-hand comment to me at the last ICFP that he'd dreamed up this cool idea as a promise to implement it.)

I won't try to convince you that lazy evaluation is the best thing in the world, but there are certain good points about it. I've got some stream-processing programs that scoot lazy lists through any variety of combinators that run happily on gigabytes of data while using only 3.5 MB or so of memory (of which more than 2MB is GHC runtime). And someone smarter than I am pointed out to me last year that you would really be quite surprised, as a typical Haskell programmer, how much you depend on lazy evaluation.
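The kind of stream processing described above can be sketched with ordinary Prelude combinators (illustrative only, assuming a strict final consumer): the input list is produced, transformed, and consumed one element at a time, so memory use stays flat however long the stream is.

```haskell
module Main where

import Data.List (foldl')

-- map and filter are lazy, so no intermediate list is ever fully built;
-- the strict foldl' consumes each element as soon as it is produced.
pipeline :: Int -> Int
pipeline n = foldl' (+) 0 (filter even (map (* 3) [1 .. n]))

main :: IO ()
main = print (pipeline 1000000)
```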

But what we really need is a really good book on dealing with lazy evaluation in the real world (which is not so different from the academic world, really, except they simply don't get a paper published, and we get clients coming after us with knives) that will properly cover most of the issues relating to this and, more importantly, give us an intuitive sense of what's going to explode our heap and what isn't.

I don't think that this is a new thing; I'm sure other languages and architectures have been through this too. How did the first programmers deal with hardware stacks and all that, after all? Not so well, I bet.

cjs
  • BTW, you can, with the help of Template Haskell, make all sorts of things instances of `NFData`, and get strict out the wazoo, many, many times over. I learned the hard way that this is not the best solution to anything except truly blowing out your CPU cache.... – cjs Jun 15 '09 at 18:45
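(For readers unfamiliar with it, the `NFData` approach mentioned in the comment above looks roughly like this, using the deepseq package that ships with GHC; the data here is illustrative. Every call to `force` walks the entire structure, which is exactly the repeated-traversal cost being criticised.)

```haskell
module Main where

import Control.DeepSeq (force)

table :: [(Int, Int)]
table = [(x, x * x) | x <- [1 .. 1000]]

main :: IO ()
main = do
  -- force traverses the whole list, evaluating every pair and every Int;
  -- applying it again to already-evaluated data just burns time and cache.
  let t = force table
  print (length t)
```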
5

I recently saw some work in this area:

https://ghc.haskell.org/trac/ghc/wiki/StrictPragma

You can hear a tiny bit about it in SPJ's GHC status update here:

http://youtu.be/Ex79K4lvJno?t=9m33s (Link starts at the relevant piece at 9:33)

Jon Smock
5

Using `NFData` and `rnf` everywhere isn't a solution, since it means repeatedly traversing large structures that have already been evaluated.

The introductory chapter of Ben Lippmeier's PhD thesis (about DDC) is about the best critique of Haskell that I've seen--it discusses issues of laziness, destructive update, monad transformers, etc. DDC has laziness but you have to request it explicitly, and it's considered an effect, which is tracked and managed by DDC's type-and-effect system.

solrize
5

I think that Jan-Willem Maessen's pH compiler is/was strict. The next closest is Robert Ennals' speculative evaluation fork of GHC 5. The spec_eval fork is not strict, but instead optimistically evaluates. I don't know if either of those is still current/usable/etc.

shapr
1

I'm looking for a Haskell compiler that uses strict evaluation by default instead of lazy evaluation.

Such a compiler would not be a Haskell compiler. If you really want, you could consider putting {-# LANGUAGE Strict #-} pragmas in your files. This works with GHC 8.0.2, 8.2.2, and 8.4.1, i.e. the last three releases of the compiler at the time of writing.
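A minimal sketch of what the pragma does (the names below are illustrative): bindings in the module become strict by default, while a `~` pattern opts back into laziness, and constructors defined outside the module, like `(:)`, keep their lazy fields.

```haskell
{-# LANGUAGE Strict #-}
module Main where

-- Under Strict, a plain let binding is forced, so binding undefined
-- would crash here; the ~ pattern restores the usual lazy behaviour.
lazyOk :: Int
lazyOk = let ~x = (undefined :: Int) in 42

-- (:) comes from base, outside this module, so its fields stay lazy
-- and infinite structures still work:
ones :: [Int]
ones = 1 : ones

main :: IO ()
main = do
  print lazyOk
  print (take 3 ones)
```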

It would also be helpful if there was a way to use lazy evaluation in certain places too, just in case I want something like an infinite list

There is no such method. Instead, use GHC as it was intended - as a lazy language. Learning to think about your code, profile, and use functional data structures correctly will be far more useful than mindlessly applying strictness pragmas everywhere. GHC already has a strictness analyzer.
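The point about understanding your code rather than applying blanket strictness shows up in the classic accumulator example (a sketch): lazy `foldl` builds a chain of suspended `(+)` applications, while `foldl'` forces the accumulator at each step; with -O2, GHC's strictness analyser will often transform the former into the latter anyway.

```haskell
module Main where

import Data.List (foldl')

sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0     -- builds ((0+1)+2)+... as unevaluated thunks

sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0  -- accumulator forced at every step: constant space

main :: IO ()
main = print (sumStrict [1 .. 1000000])
```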

(I probably never will).

That's exactly what the authors of llvm-hs thought when they chose to use a strict state monad rather than a lazy one. Instead, it caused an unexpected bug down the road. Laziness and recursion go hand-in-hand.

Please do not try to convince me that lazy evaluation is better, I really need the performance.

I'm dubious this is actually what you want when it doesn't reliably increase performance of Haskell code while simultaneously breaking existing code and making existing resources useless. If this is how you intend to write programs, please just use OCaml or Scala and leave the Haskell community alone.

IIRC, Simon Peyton Jones even said that lazy evaluation wasn't really necessary, it was there mostly to prevent them from making the language impure.

That is not true. You can read more on the actual history of Haskell here.

0

There is also seqaid, which aims at the middle of the lazy-strict spectrum.

Seqaid is a GHC plugin providing non-invasive auto-instrumentation of Haskell projects, for dynamic strictness (and parallelism) control. This will soon include optimisation for automated space leak relief using minimal strictification.

phunehehe
0

You clearly have made up your mind on the value of strict evaluation, but I think you are missing the point of using Haskell. Haskell's lazy evaluation allows much more flexible optimization strategies to be employed by the compiler/interpreter. Forcing your own strictness overrides the optimizer. In the end, excessive strict evaluation will never be as efficient as the automated optimization. Try a folding sum over a sequence of numbers in GHCi, with and then without lazy evaluation. You can see the difference quite clearly -- in this case lazy evaluation is always faster.

  • All the serious Haskell people like SPJ, etc. now basically agree that, while lazy-by-default made for an important research exercise, strict-by-default is the right approach because programmers need to reason about the performance of their code. We're basically spinning our wheels until some Haskell successor like Idris, etc. picks up enough developers, researchers, etc. to replace Haskell as "the next big research language." All those new research languages are strict-by-default because they learned from Haskell. Now you can always use laziness in those languages. – Jeff Burdges Dec 24 '14 at 06:06
  • @JeffBurdges please give a link/resource for your claim that SPJ advocates "strict by default". While I know that he is well aware of the downside of laziness, I have never actually heard him say that he regrets "lazy by default". – hasufell Jun 26 '15 at 18:47
  • I never asserted that *anyone* regretted lazy evaluation in Haskell, only that lazy evaluation was understood as a topic that needed to remain a pure research area, and languages meant for deployment should use strict evaluation. In practice, strictness is done wrong far too often by GHC to compete with human programmers. I think this comes under Haskell's slogan 'avoid success at all costs'. – Jeff Burdges Jun 29 '15 at 18:55
  • In fact, there is a quote of SPJ going further and saying "the next Haskell will be strict" here: https://news.ycombinator.com/item?id=1924061 There is zero regret about making Haskell lazy-by-default, but merely observations that: (1) they learned what they can easily learn from laziness, (2) compilers have failed to adequately exploit laziness, although that remains an interesting research topic, and (3) to pursue their real goal of purity they should build a successor to Haskell that's not lazy. Idris is one such attempt. – Jeff Burdges Jun 29 '15 at 19:08
  • @JeffBurdges that is not at all an accurate account of SPJ's views and it is not the consensus either. It's ill-informed hogwash. –  Apr 09 '18 at 01:58
  • Well compound data structures such as lists and trees and sets should be lazy by default, while simple data structures should be strict by default to save the thunks... Hmm, sounds a lot like how C or Python handles call-by-value and call-by-reference... – aoeu256 Aug 26 '19 at 04:47