
I've been studying functional program optimization and digging into the GHC source. I (mostly) understand what eta reduction and eta expansion are. Eta reduction removes redundant lambdas:

\x -> abs x
=>
abs

Eta expansion is the opposite of eta reduction and does things like this (correct me if I'm wrong):

abs
=>
\x -> abs x
-----------------------------------------------
foo = abs
=>
foo x = abs x
-----------------------------------------------
foo = bar 100
bar x y = x + y
=>
foo y = bar 100 y
bar x y = x + y
-----------------------------------------------
etc...

What I don't get is how the two transformations avoid sending the compiler into an infinite loop: a value is eta expanded, then eta reduced, then expanded again, and so on. How does GHC keep the two optimizations from getting in each other's way?

xilpex
  • I'm no compiler designer, but I think the general idea is that each transformation is performed under particular conditions (where it's considered desirable), and the trick is to make sure that you make your decision and stick with it. – dfeuer Oct 23 '20 at 02:44
  • @dfeuer -- Naturally, that is also what I first thought, but I'm not sure what the conditions are or how efficient/reasonable "stick with the decision" would be. Also, sticking with the decision seems like a sort of workaround to me, and I don't think the people working on GHC would do such a thing. – xilpex Oct 23 '20 at 02:50
  • AFAIU, GHC does not do eta-reduction/expansion regularly. This is because Haskell mandates `seq`, and because of that eta is unsound. E.g. let `f1, f2 :: Int->Int` be `f1 = undefined ; f2 x = undefined x`. Then eta would convert `f1` to `f2` (and vice versa), but `seq f1 () = undefined` while `seq f2 () = ()`. In Haskell, you can't always apply eta. (A runnable version of this example follows the comments.) – chi Oct 23 '20 at 07:48
  • Further, `f1 = let y = expensive in foo y` and `f2 x = (let y = expensive in foo y) x` might have a very different performance, unless some other optimization kicks in. The former could compute `expensive` only once and cache that result for the rest of the program. The latter could instead recompute `expensive` at each call. None of these two executions is (in all cases) more efficient than the other one. – chi Oct 23 '20 at 07:51
  • I think the general rule (in GHC and other compilers) is that every code transformation has a cost (e.g. inlining has a cost that scales with the size of the inlined code, eta expansion has a cost that scales with the size of the work being duplicated, etc.) and optimizing the code means optimizing the cost with the best transformations. – HTNW Oct 23 '20 at 17:33
  • @chi, GHC is very eager to do eta expansion in some cases. And by default it'll even do so when that can change semantics. See [-fpedantic-bottoms](https://downloads.haskell.org/ghc/latest/docs/html/users_guide/using-optimisation.html#ghc-flag--fpedantic-bottoms). – dfeuer Oct 30 '20 at 01:20
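To make chi's `seq` point concrete, here is a small runnable sketch (my own, not from the thread). `evaluate` and `try` from `Control.Exception` are used only to catch the bottom so the program keeps running:

import Control.Exception (SomeException, evaluate, try)

f1 :: Int -> Int
f1 = undefined        -- the function itself is bottom

f2 :: Int -> Int
f2 x = undefined x    -- the eta-expanded form: a lambda wrapping bottom

main :: IO ()
main = do
  -- f2 is already a lambda (WHNF), so seq succeeds and this prints ().
  print (f2 `seq` ())
  -- Forcing f1 forces undefined, so this throws; try catches the exception.
  r <- try (evaluate (f1 `seq` ())) :: IO (Either SomeException ())
  print r  -- prints Left Prelude.undefined ...

So `f1` and `f2` are eta-convertible terms with observably different behavior, which is why GHC can't apply eta freely.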

1 Answer


I think I found an answer. A thesis by a GHC contributor (I don't remember the title) mentions that GHC doesn't do eta reduction. Instead, it does eta expansion and beta reduction (IIRC), and beta reduction does most of the work.
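For illustration, here is a hand-worked sketch (not actual GHC Core output) of how the two play together on the `foo = bar 100` example from the question. Eta expansion only ever adds a lambda up to the known arity, and beta reduction only ever consumes an application of a lambda, so the pair can't oscillate the way eta expansion plus eta reduction could:

foo = bar 100
=>  -- eta expansion: bar has arity 2, so wrap one lambda
foo = \y -> bar 100 y

-- at a call site:
foo 5
=>  -- inline foo
(\y -> bar 100 y) 5
=>  -- beta reduction
bar 100 5

Nothing in the second chain reintroduces the lambda that eta expansion created, so there is no cycle.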

xilpex