I'm reading an Introduction to Haskell course, and it introduces the well-known Tower of Hanoi problem as homework for the first class. I was tempted and wrote a solution:

type Peg = String

type Move = (Peg, Peg)

hanoi :: Int -> Peg -> Peg -> Peg -> [Move]
hanoi n b a e
  | n == 1 = [(b, e)]
  | n > 1 = hanoi (n - 1) b e a ++ hanoi 1 b a e ++ hanoi (n - 1) a b e
  | otherwise = []

I've played with it a little and saw that it obviously uses Tail Call Optimization, since it works in constant memory.

Clojure is the language that I work with most of the time, so I was challenged to write a Clojure solution. Naive ones are discarded, since I want to write it to use TCO:

(defn hanoi-non-optimized
  [n b a e]
  (cond
    (= n 1) [[b e]]
    (> n 1) (concat (hanoi-non-optimized (dec n) b e a)
                    (hanoi-non-optimized 1 b a e)
                    (hanoi-non-optimized (dec n) a b e))
    :else   []))

Well, Clojure is JVM-hosted and thus doesn't have TCO by default; one should use recur to get it (I know the story...). On the other hand, recur imposes some syntactic constraints, since it has to be the last expression, i.e. in tail position. I feel a bit bad because I still can't write a solution that is as short/expressive as the Haskell one and uses TCO at the same time.

Is there a simple solution for this which I can't see at the moment?

I have a great respect for both languages and already know that this is rather problem with my approach than with Clojure itself.

edited by Will Ness
asked by foki

1 Answer


No, the Haskell code isn't tail-recursive. It is guarded-recursive, with the recursion guarded by a lazy data constructor, `:` (to which the `++` calls are ultimately transformed). Because of the laziness, only one part of the recursion call tree (`a ++ b ++ c`) is explored at a time, so the stack's depth never exceeds `n`, the number of disks, which is very small, like 7 or 8.

So the Haskell code explores `a` while putting the `c` part aside. Your Clojure code, on the other hand, calculates both parts (`a` and `c`; `b` doesn't count) before concatenating them, so it is doubly recursive, i.e. computationally heavy.
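For illustration only (this is a sketch, not part of the original answer, and the name `hanoi-lazy` is made up): the guarded-recursive behaviour can be mimicked in Clojure with `lazy-cat`, which delays each concatenation the way Haskell's `++` does, so only the demanded prefix of the move list is ever forced:

```clojure
;; Sketch: lazy-cat plays the role of Haskell's lazy (++), so only the
;; part of the call tree needed for the demanded prefix is explored.
(defn hanoi-lazy
  [n from via to]
  (cond
    (= n 1) [[from to]]
    (> n 1) (lazy-cat (hanoi-lazy (dec n) from to via)
                      [[from to]]
                      (hanoi-lazy (dec n) via from to))
    :else   []))

;; (take 3 (hanoi-lazy 64 "a" "b" "c")) returns promptly, even though
;; the full sequence would have 2^64 - 1 moves.
```

This trades the call stack for a chain of suspended concatenations, so it is still guarded recursion rather than TCO.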

What you're looking for is not TCO but TRMCO: tail recursion modulo cons optimization, i.e. building the list top-down from inside a loop with a simulated stack. Clojure is especially well suited for this, with its tail-appending conj (right?) instead of Lisp's and Haskell's head-prepending cons.

Or just print the moves instead of building the list of all of them.
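A printing version might look like this (a sketch; the name `print-hanoi` is illustrative). No list is built, and the recursion depth is still only `n`:

```clojure
;; Sketch: side-effecting version; nothing is accumulated, and the JVM
;; stack depth is bounded by n (the number of disks), not by the number
;; of moves.
(defn print-hanoi
  [n from via to]
  (when (pos? n)
    (print-hanoi (dec n) from to via)   ; move n-1 disks out of the way
    (println from "->" to)              ; move the largest disk
    (print-hanoi (dec n) via from to))) ; move the n-1 disks back on top
```

`(print-hanoi 3 "a" "b" "c")` prints the seven moves in order.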

edit: actually, TRMCO means we're allowed to reuse the call frame if we maintain the "continuation stack" ourselves, so the stack depth becomes exactly 1. Haskell in all likelihood builds a left-deepening tree of nested ++ thunk nodes in this case, as explained here, but in Clojure we're allowed to rearrange it into the right-nested list ourselves, when we maintain our own stack of to-do-next invocation descriptions (for the b and c parts of the a ++ b ++ c expression).
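One possible shape of such a loop (a sketch of the idea only; the name `hanoi-iter` and the `[n from via to]` task layout are my own illustration, not a polished implementation): the to-do stack holds pending sub-problems, and `conj` appends each move at the tail, building the list top-down.

```clojure
;; Sketch: an explicit to-do stack replaces the call stack, so the only
;; recursion is recur -- a single, constant-depth loop frame.
(defn hanoi-iter
  [n from via to]
  (loop [todo  (list [n from via to]) ; stack of pending sub-problems
         moves []]                    ; result, built top-down via conj
    (if (empty? todo)
      moves
      (let [[m from via to] (first todo)
            more (rest todo)]
        (cond
          (zero? m) (recur more moves)
          (= m 1)   (recur more (conj moves [from to]))
          ;; expand a ++ b ++ c into three smaller tasks,
          ;; pushed so that the `a` part is processed first
          :else     (recur (list* [(dec m) from to via]
                                  [1 from via to]
                                  [(dec m) via from to]
                                  more)
                           moves))))))
```

The to-do list's length is bounded by roughly `2n`, which plays the role the call stack played in the recursive version.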

answered by Will Ness
  • Thank you for the references. But still, when I compile the Haskell function with `ghc-8.0.1` and run it with many more disks, like 9999, it never exceeds the memory. Actually, the processor load acts weird, but memory is constant and no stack overflow happens. As I understand it, you're saying it should cause a stack overflow for larger `n`s? – foki Nov 05 '16 at 10:56
  • my understanding is, for `n` disks the recursion depth will be `n` as well. I expect 10000 isn't too much. Are you running the standalone executable with `+RTS -s` switch, for "stats"? There's also fuller profiling options, you must be able to find more about it on SO. – Will Ness Nov 05 '16 at 18:04
  • Can we see your Clojure TRMCO version? – Thumbnail Nov 07 '16 at 13:12
  • @Thumbnail this would have to be an explicit loop with simulated stack, *implementing by hand* what a TRMCO-able compiler would produce. I'll try to cook something up later. – Will Ness Nov 07 '16 at 19:13
  • Don Knuth may have anticipated this in [Structured Programming with Goto Statements](http://sbel.wisc.edu/Courses/ME964/Literature/knuthProgramming1974.pdf) in 1974. Look at example 6 starting on p20/journal-page 280. – Thumbnail Nov 08 '16 at 01:00
  • @Thumbnail thanks a lot for the link! Looks very interesting. – Will Ness Nov 08 '16 at 07:53