I'm trying to improve my programming logic skills, and I was watching a video on how to approach Fibonacci numbers.
After looking at the pseudocode at 6:34,
I wrote this:
In [14]: def my_fib(x, memo=dict()):
    ...:     if memo.get(x):
    ...:         return memo[x]
    ...:     if x == 1 or x == 2:
    ...:         result = 1
    ...:     else:
    ...:         result = my_fib(x - 1, memo) + my_fib(x - 2, memo)
    ...:     memo[x] = result
    ...:     return result
This works great. However, when I watched the video to the end and the guy revealed his Python code, I discovered that it was slightly different from mine.
CS Dojo code:
In [68]: def fib_dyn_2(x, memo):
    ...:     if memo[x] is not None:
    ...:         return memo[x]
    ...:     if x == 1 or x == 2:
    ...:         result = 1
    ...:     else:
    ...:         result = fib_dyn_2(x - 1, memo) + fib_dyn_2(x - 2, memo)
    ...:     memo[x] = result
    ...:     return result
    ...:
    ...: def fib_memo(x):
    ...:     memo = [None] * (x + 1)
    ...:     return fib_dyn_2(x, memo)
There is "slight" difference I use dictionary for caching he uses list.
What got me is that my code appears to be a little bit faster when getting to numbers in the sequence with x >= 100,
and also when computing the same number in the sequence more than once.
For example, my code:
In [4]: %time my_fib(100)
CPU times: user 70 µs, sys: 44 µs, total: 114 µs
Wall time: 92 µs
Out[4]: 354224848179261915075L
CS Dojo code:
In [5]: %time fib_memo(100)
CPU times: user 99 µs, sys: 128 µs, total: 227 µs
Wall time: 187 µs
Out[5]: 354224848179261915075L
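I suspect part of the repeat-call speedup is that my memo=dict() default is created once when the function is defined, so the cache survives between calls, whereas fib_memo builds a fresh list on every call. A sketch of how I could time both starting from a cold cache each run (same functions as above, timings omitted):

%timeit my_fib(100, {})    # pass a fresh dict so no cache is carried over from earlier calls
%timeit fib_memo(100)      # already allocates a fresh [None] * (x + 1) list on every call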
The question is: which one is "better" or more desirable as an answer?