
I've written a simple Haskell program to solve a puzzle. The algorithm is correct, and it produces the correct result for n = 40, which is 14466. However, for n = 100 the program is so slow that I haven't had the patience to wait for it to finish.

I don't understand why it's so slow, since I would have expected it to cache the results of all the intermediate function calls, Haskell being a lazy language and all.

My compiler is GHC 7.6.3, and I'm running the program in GHCi.

I've tried profiling, but this just tells me that 99.9% of the time is spent in the isLoosing function.

isLoosing :: Int -> Int -> Bool
isLoosing x y
    | y < x             = isLoosing y x
    | x == 0            = True
    | x == 1            = False
    | y `mod` x == 0    = False
    | otherwise         = and [ not (isLoosing (y-x*m) x) |
                                m <- [1..(y `div` x)] ]

loosingSum :: Int -> Int
loosingSum n = sum  [ x + y |
                        x <- [1..(n-1)],
                        y <- [(x+1)..n],
                        isLoosing x y == True ]

main = print $ loosingSum 40
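
A note on the caching expectation above: laziness means a named thunk is evaluated at most once, but GHC does not memoize function applications, so every recursive isLoosing call is recomputed from scratch. A minimal sketch (the expensive function is made up purely for illustration; run it in GHCi or without -O to see the difference):

import Debug.Trace (trace)

expensive :: Int -> Int
expensive n = trace "computing!" (n * n)

main :: IO ()
main = do
    let shared = expensive 5            -- one named thunk
    print (shared + shared)             -- "computing!" is printed once: the thunk is shared
    print (expensive 5 + expensive 5)   -- "computing!" is printed twice: separate calls are not cached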
1 Answer

I found out that one needs to use memoization to speed this up. I found a nice library to do it for me, as shown below. The code now runs in 0.12 s, but unfortunately that is still way too slow: running it with n = 10000 again takes practically infinite time, while a comparable algorithm in C++ finishes in a couple of seconds. So I'm starting to think maybe Haskell isn't the solution to everything...

import Data.MemoCombinators (memo2,integral)

bigN :: Int
bigN = 100

isLoosingCached = memo2 integral integral isLoosing

isLoosing :: Int -> Int -> Bool
isLoosing x y
    | y < x             = isLoosingCached y x
    | x == 0            = True
    | x == 1            = False
    | y `mod` x == 0    = False
    | otherwise         = and [ not (isLoosingCached (y-x*m) x) |
                            m <- [1..(y `div` x)] ]

loosingSum :: Int
loosingSum = sum    [ x + y |
                        x <- [1..(n-1)],
                        y <- [(x+1)..n],
                        isLoosingCached x y == True ]
                where n = bigN

main = print $ loosingSum

The solution in C++ looks like this. The biggest difference is that it uses iteration rather than recursion.

#include <iostream>
#include <stdint.h>
#define N 10000

// STATES[x][y]: 1 = losing position, -1 = winning position, 0 = not yet computed
short STATES[N + 1][N + 1];
using namespace std;

int main()
{
    uint64_t sum = 0;
    // Base cases: x == 0 is losing, x == 1 is winning.
    for (int y = 0; y <= N; y++)
    {
        STATES[0][y] = 1;
        STATES[1][y] = -1;
    }
    // Any position where x divides y is winning.
    for (int x = 2; x < N; x++)
        for (int y = x; y <= N; y += x)
            STATES[x][y] = -1;
    for (int x = 2; x < N; x++)
    {
        for (int y = x + 1; y <= N; y++)
        {
            if (STATES[x][y] == 0)
            {
                int k;
                // Walk through the successor positions (y - x, x), (y - 2x, x), ...
                for (int y_ = y - x; y_ > 0; y_ -= x)
                {
                    if (y_ > x)
                        k = STATES[x][y_] * -1;
                    else
                        k = STATES[y_][x] * -1;
                    if (k == -1)
                        break;
                }
                STATES[x][y] = k;
            }
            if (STATES[x][y] == 1)
                sum += x + y;
        }
    }
    cout << sum << endl;
    return 0;
}
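
For comparison, the same single-table idea can be written in Haskell with one lazily filled Data.Array, which is much closer to the pre-allocated C++ array than the memo-combinator version. This is only a sketch under my own naming (table, look, losing, and the bound of 1000 are illustrative, not from the original post); for n = 10000 an unboxed, mutable table (Data.Array.ST or Data.Array.Unboxed) would match the C++ memory layout more closely.

import Data.Array

bigN :: Int
bigN = 1000

-- table ! (x, y) is True when the position (x, y), with x <= y, is losing.
table :: Array (Int, Int) Bool
table = array ((0, 0), (bigN, bigN))
    [ ((x, y), losing x y) | x <- [0 .. bigN], y <- [x .. bigN] ]
  where
    losing x y
        | x == 0         = True
        | x == 1         = False
        | y `mod` x == 0 = False
        | otherwise      = and [ not (look (y - x * m) x)
                               | m <- [1 .. y `div` x] ]
    -- Normalise the pair so we only ever index the x <= y half of the table.
    look a b
        | a <= b    = table ! (a, b)
        | otherwise = table ! (b, a)

losingSum :: Int
losingSum = sum [ x + y | x <- [1 .. bigN - 1]
                        , y <- [x + 1 .. bigN]
                        , table ! (x, y) ]

main :: IO ()
main = print losingSum

Because every entry lives in one array, each position is computed at most once and later lookups are plain indexing, instead of rebuilding memo structures on the heap.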
  • "So I'm starting to think maybe Haskell isn't the solution to everything": BLASPHEMER! – crockeea Apr 19 '14 at 16:56
  • The biggest difference is that the C code pre-allocates a single array, but the Haskell code constantly allocates new cons cells. That is, you're comparing two entirely different algorithms. – Carl Apr 19 '14 at 17:50