
This Haskell solution has been described as recursive but not quite efficient, according to a few claims, so I have tried to rewrite it with that in mind. The problem comes from an online course whose homework assignment is the Tower of Hanoi.

A new solution of mine:

type Peg = String
type Move = (Peg, Peg)

hanoi :: Integer -> Peg -> Peg -> Peg -> [Move]

hanoi n a b c = hanoiToList n a b c []
  where
    hanoiToList 0 _ _ _ l = l
    -- hanoiToList 1 a b _ l = (a, b) : l
    hanoiToList n a b c l = hanoiToList (n-1) a c b ((a, b) : hanoiToList (n-1) c b a l)

Let me explain the reasoning: since the function builds a list, I pass an accumulator list through the recursive calls instead of appending the result of each subproblem after it has been solved, which I believe was the cause of the inefficiency in the original. I have checked the result and it's correct, but I wanted to know whether this is actually more efficient or simply redundant compared with the previous solution that I linked to.
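For reference, a quick sanity check of the solution above (a self-contained sketch; the peg names `"a"`, `"b"`, `"c"` are just illustrative):

```haskell
-- Self-contained copy of the accumulator version, with a small check:
-- hanoi 2 moves the top disc aside, moves the bottom disc to the
-- target, then moves the top disc on top of it.
type Peg = String
type Move = (Peg, Peg)

hanoi :: Integer -> Peg -> Peg -> Peg -> [Move]
hanoi n a b c = hanoiToList n a b c []
  where
    hanoiToList 0 _ _ _ l = l
    hanoiToList n a b c l =
      hanoiToList (n-1) a c b ((a, b) : hanoiToList (n-1) c b a l)

main :: IO ()
main = do
  print (hanoi 2 "a" "b" "c")            -- [("a","c"),("a","b"),("c","b")]
  print (length (hanoi 10 "a" "b" "c"))  -- 2^10 - 1 = 1023 moves
```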

EDIT: I commented out a line that was unnecessary.

Marco_O
    This version is tail-recursive, which might make it more efficient, yeah. To know for sure, you could try benchmarking it with [criterion](http://hackage.haskell.org/package/criterion) or examining the optimised GHC Core output with `-O2 -ddump-simpl` (with `-dsuppress-uniques -dsuppress-module-prefixes` to make it more readable). – Jon Purdy Nov 08 '18 at 01:11
    The solution you posted is, essentially, the standard recursive solution except that it uses [difference lists](https://hackage.haskell.org/package/dlist-0.8.0.5/docs/Data-DList.html) instead of regular ones. Difference lists admit a faster concatenation, which you exploit to gain performance. – chi Nov 08 '18 at 10:07
  • Funny that you mention that, because it's the next homework assignment in the Haskell tutorial that I'm following, which you may be familiar with. Perhaps it's meant to tell me that, when I learn something new, I should apply it to the previous exercise to improve it. Thank you Jon Purdy for criterion too. – Marco_O Nov 08 '18 at 11:19
  • @JonPurdy No, this is not tail-recursive. There is one recursive call in tail position (good), but there's also another one which is not in tail position. Perhaps you missed the second call. – chi Nov 08 '18 at 12:28
  • @chi yes, syntactically it is a *nested* recursion, but really because of Haskell's laziness it's *not* a double recursion like e.g. Fibonacci (where the nested call is forced by the strict `+`) but a guarded recursion (for the 2nd argument) -- the call is delayed (and not because of the `:`, but just because of the laziness). so it is very much like *tail* recursive tree traversal with explicit stack of "do-next" notes (either functions, or just the previously visited tree nodes themselves). when the leaf is reached the next action is called in tail position too. – Will Ness Nov 08 '18 at 14:48
  • @WillNess I completely agree. I still think it's not entirely correct to call it tail recursion. A key point is that, because of laziness, tail recursion is not that important. E.g. `map f (x:xs)=f x:map f xs` is not tail recursive, but it's much better than its tail recursive variant (and it works on infinite lists, too!). – chi Nov 08 '18 at 14:59
  • @chi (obligatory mention of [this canonical DL answer](https://stackoverflow.com/a/13879693/849891) by Daniel Fischer and also [this (my answer)](https://stackoverflow.com/a/14942678/849891) which is also somewhat relevant). DL is efficient because it is like a tree traversal. (and a very interesting - and very related - historical code from John McCarthy linked from [this](https://stackoverflow.com/a/43945275/849891) another answer of mine). ---- although in `map` it is guarded by an explicitly lazy *data constructor* - what *is* usually referred to as "guarded" recursion. – Will Ness Nov 08 '18 at 15:01
  • @chi shall we say, it builds guarded-recursive calls in a tail-recursive fashion. :) – Will Ness Nov 08 '18 at 15:13
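To make chi's difference-list remark concrete: a difference list represents a list as a function `[a] -> [a]` that prepends its contents, so function composition acts as constant-time append. A minimal sketch (the names here are illustrative, not the full `dlist` package API):

```haskell
-- A difference list is a function that prepends its contents to
-- whatever list it is applied to; (.) then acts as O(1) append.
type DList a = [a] -> [a]

fromList :: [a] -> DList a
fromList = (++)

toList :: DList a -> [a]
toList dl = dl []

append :: DList a -> DList a -> DList a
append = (.)

main :: IO ()
main =
  -- [1,2] ++ ([3,4] ++ []) = [1,2,3,4], with no repeated copying
  print (toList (fromList [1,2] `append` fromList [3,4 :: Int]))
```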

1 Answer


Yep, that gets rid of the inefficient copying of `(:)` constructors. It's a slightly non-idiomatic way to write it, but the more idiomatic way should have identical performance when compiled with optimizations.

hanoi n a b c = hanoiToList n a b c []
  where
    hanoiToList 0 _ _ _ = id
    hanoiToList n a b c = hanoiToList (n-1) a c b . ((a, b) :) . hanoiToList (n-1) c b a

That's the slightly more idiomatic approach. It's all about emphasizing the viewpoint that you're building up a transformation of a value. In practice it's identical as long as function composition is being inlined and simplified as expected.
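As a sanity check, the two formulations should agree move for move; the sketch below compares them directly (the names `hanoiAcc` and `hanoiComp` are just labels for the question's and the answer's versions):

```haskell
type Peg = String
type Move = (Peg, Peg)

-- Accumulator version from the question.
hanoiAcc :: Integer -> Peg -> Peg -> Peg -> [Move]
hanoiAcc n a b c = go n a b c []
  where
    go 0 _ _ _ l = l
    go n a b c l = go (n-1) a c b ((a, b) : go (n-1) c b a l)

-- Composition version from the answer: build a transformation of a
-- list, then apply it to [].
hanoiComp :: Integer -> Peg -> Peg -> Peg -> [Move]
hanoiComp n a b c = go n a b c []
  where
    go 0 _ _ _ = id
    go n a b c = go (n-1) a c b . ((a, b) :) . go (n-1) c b a

main :: IO ()
main = do
  print (hanoiAcc 4 "a" "b" "c" == hanoiComp 4 "a" "b" "c")  -- True
  print (length (hanoiComp 5 "a" "b" "c"))                   -- 2^5 - 1 = 31
```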

Carl
  • Strange, I thought that, since you had an empty list as an argument of `hanoiToList`, you had to put something in its place (`l` in my case) so that it matched. Is this correct? – Marco_O Nov 08 '18 at 01:26
  • @Marco_O nah. Functions are values that can be returned from other functions. If you never inspect an argument, it's often possible to never even name it. – Carl Nov 08 '18 at 02:33
  • Thank you! I noticed something that was off about my code. I'm going to edit it to show you the changes. – Marco_O Nov 08 '18 at 09:16
    @Marco_O `hanoiToList 0 _ _ _ l = l` === `hanoiToList 0 _ _ _ = \l -> l` === `hanoiToList 0 _ _ _ = id`. it's the same thing. So even if you have `hanoiToList n a b c []` as your top call, what's being matched in the definition is `hanoiToList n a b c`, which is evaluated, returns a function, and that function gets applied to `[]`. looks different, but is the same. – Will Ness Nov 08 '18 at 15:21