
I've got two Fibonacci methods I'm testing out. Both should be linear. Either I don't understand memoization or HashMap lookups are slower than I thought.

I understand that adding DP shouldn't speed up the recursive function by a lot, but since the memo map is static, the values only need to be computed once (the map never falls out of scope). After the first execution, the answer is just retrieved from the HashMap.

I'm doing this in preparation for implementing the O(log n) Fibonacci function and benchmarking it, but got detoured slightly when I saw this. (Even adding memoization to the iterative approach slows it down.)

Please enlighten me, I'm feeling quite stupid at the moment.

Iterative Approach:

public static int linearIterativeFib(int x) {
    if (x <= 2) {
        return 1;
    } else {
        int prev = 0;
        int result = 1;
        for (int i = 2; i <= x; i++) {
            int temp = result;
            result += prev;
            prev = temp;
        }
        return result;
    }       
}

Recursive Approach:

static Map<Integer, Integer> memo = new HashMap<Integer, Integer>();

public static int linearRecursiveFib(int x) {
    if (x <= 2) {
        return 1;
    } else if (memo.containsKey(x)) {
        return memo.get(x);
    } else {
        int result = linearRecursiveFib(x - 1) 
                + linearRecursiveFib(x - 2);
        memo.put(x, result);
        return result;
    }
}

And a unit test:

@Test
public void testSpeed() {
    int input = 35;
    long start = System.currentTimeMillis();
    for (int i = 0; i < 10000000; i++) {
        fib.linearIterativeFib(input);
    }
    System.out.println("Iterative approach took " 
            + (System.currentTimeMillis() - start) + "ms");

    start = System.currentTimeMillis();
    for (int i = 0; i < 10000000; i++) {
        fib.linearRecursiveFib(input);
    }
    System.out.println("Recursive approach took " 
            + (System.currentTimeMillis() - start) + "ms");
}

Returns:

Iterative approach took 6ms
Recursive approach took 132ms
Jonas Schreiber
  • I'm guessing that it's because recursion involves pushing and popping stack contents. – Jo Kachikaran Apr 16 '16 at 21:24
  • It's entirely likely that there are significant performance differences, but benchmarking a JIT runtime requires careful measures. Restructure your benchmark and edit or post a new question if you still have a significant performance difference to examine. – chrylis -cautiouslyoptimistic- Apr 16 '16 at 21:34
  • I tried with an `int[]` instead of a `HashMap` and the results only differed by a factor of 2. Recursive still being slower. – Jorn Vernee Apr 16 '16 at 21:37

1 Answer


First off, recursion in Java is slower than iteration.

Instead of doing contains and then get, why not just do get and a null check? Otherwise the hash and bucket need to be calculated twice; I'm not sure how big the performance impact is.
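A minimal sketch of that change, reusing the static memo map from the question (the method name here is just for illustration):

public static int fibWithSingleLookup(int x) {
    if (x <= 2) {
        return 1;
    }
    Integer cached = memo.get(x);   // one lookup: hash and bucket are computed once
    if (cached != null) {
        return cached;
    }
    int result = fibWithSingleLookup(x - 1)
            + fibWithSingleLookup(x - 2);
    memo.put(x, result);
    return result;
}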

Autoboxing. It might be a good idea to use an external lib with primitive maps, like GNU Trove. I'm not sure what the perf gain will be, but you will allocate a lot fewer objects.
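As a rough sketch of the same idea without an external library, an int[] can serve as a primitive memo (the zero-as-unfilled sentinel works because no Fibonacci value here is 0, and fib(46) is the largest that fits in an int):

static int[] primitiveMemo = new int[47];   // fib(46) is the largest Fibonacci number that fits in an int

public static int primitiveMemoFib(int x) {
    if (x <= 2) {
        return 1;
    }
    if (primitiveMemo[x] != 0) {            // 0 means "not computed yet"; no boxing, no hashing
        return primitiveMemo[x];
    }
    int result = primitiveMemoFib(x - 1)
            + primitiveMemoFib(x - 2);
    primitiveMemo[x] = result;
    return result;
}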

It might also be a good idea to check whether the perf measurement is sound. In the recursive case, is the first iteration (or the first few) way slower than the rest? The mean value might not be a good measurement; it would be nice to have a latency histogram.
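A quick way to eyeball that (just a sketch, assuming the same fib instance as in the test): time each call individually with System.nanoTime instead of the whole loop, and compare the early calls against the later ones.

long[] samples = new long[1000000];
for (int i = 0; i < samples.length; i++) {
    long t0 = System.nanoTime();
    fib.linearRecursiveFib(35);
    samples[i] = System.nanoTime() - t0;
}
// The first calls run with an empty memo and a cold JIT, so they should stand out.
System.out.println("call 0: " + samples[0] + " ns, call 1: " + samples[1]
        + " ns, last call: " + samples[samples.length - 1] + " ns");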

Sason Ohanian
  • Please don't recommend Trove, there are better options: http://java-performance.info/hashmap-overview-jdk-fastutil-goldman-sachs-hppc-koloboke-trove-january-2015/ – leventov Jul 07 '16 at 20:05