
I'm new to Dynamic Programming, and from my understanding Dynamic Programming is where you use the results that you have already calculated to check if your function is correct. I was asked during an interview to implement a method that checks whether n is a power of two. So, I came up with this.

def isPowerOfTwo(n):
    power_of_two = [1]
    is_power_of_two = False
    if n == 0:
        return False

    if n == 1:
        return True

    while True:
        power_of_two.append(power_of_two[-1] * 2)
        if n == power_of_two[-1]:
            is_power_of_two = True
            break

        if power_of_two[-1] > n:
            break

    return is_power_of_two
toy
  • Short answer: no, that's not [dynamic programming](https://en.wikipedia.org/wiki/Dynamic_programming). – Nir Alfasi Jul 15 '15 at 05:43
  • 2
    `n != 0 and ((n & (n - 1)) == 0)` can be used to test if n is a power of 2. For what dynamic programming is, see https://en.wikipedia.org/wiki/Dynamic_programming –  Jul 15 '15 at 05:44
  • 2
    By the way, I think that you're confusing dynamic programming with [memoization](https://en.wikipedia.org/wiki/Memoization) – Nir Alfasi Jul 15 '15 at 05:47

2 Answers


From the Wikipedia article on dynamic programming:

In mathematics, computer science, economics, and bioinformatics, dynamic programming is a method for solving a complex problem by breaking it down into a collection of simpler subproblems.

However, this mainly seems oriented towards optimization problems, and determining whether N is a power of 2 is not typically framed as an optimization problem.

From the section "Dynamic programming in computer programming":

There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping subproblems (emphasis mine). If a problem can be solved by combining optimal solutions to non-overlapping subproblems, the strategy is called "divide and conquer" instead. This is why mergesort and quicksort are not classified as dynamic programming problems.

Optimal substructure means that the solution to a given optimization problem can be obtained by the combination of optimal solutions to its subproblems. Consequently, the first step towards devising a dynamic programming solution is to check whether the problem exhibits such optimal substructure. Such optimal substructures are usually described by means of recursion. For example, given a graph G=(V,E), the shortest path p from a vertex u to a vertex v exhibits optimal substructure: take any intermediate vertex w on this shortest path p. If p is truly the shortest path, then it can be split into subpaths p1 from u to w and p2 from w to v such that these, in turn, are indeed the shortest paths between the corresponding vertices...

Overlapping subproblems means that the space of subproblems must be small, that is, any recursive algorithm solving the problem should solve the same subproblems over and over, rather than generating new subproblems. For example, consider the recursive formulation for generating the Fibonacci series: Fi = Fi−1 + Fi−2, with base case F1 = F2 = 1. Then F43 = F42 + F41, and F42 = F41 + F40. Now F41 is being solved in the recursive subtrees of both F43 as well as F42. Even though the total number of subproblems is actually small (only 43 of them), we end up solving the same problems over and over if we adopt a naive recursive solution such as this. Dynamic programming takes account of this fact and solves each subproblem only once.

Although "is N / 2 a power of 2?" is a subproblem of "is N a power of 2?", and we can write a routine that solves each such subproblem only once, we don't have the kind of overlap present in the Fibonacci sequence. If we did, plain recursion would not work very well; here, it does. Adding memoization to the recursion is a kind of top-down DP technique, but it is practically unnecessary here if the O(log2 N) time of plain recursion is acceptable.

It looks like, instead of breaking the problem into smaller pieces, you built a list of powers of 2 and tested whether the input was in the list, and, if not, whether it could be found by extending the list. (You don't cache the list between calls, you rebuild it each time, but caching or not caching doesn't determine whether something is dynamic programming.) While I think your test is OK, the connection to dynamic programming is tenuous.

Here are some other ways to test this.

One way to test if a number is a power of 2 is to represent it in base 2, and make sure only one bit is set to one and the rest are zero. Many languages have a way to get an alternative base representation of an integer. A power of 2 has distinctive octal and hexadecimal string representations as well.
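In Python, for instance, bin() produces the base-2 string directly, so the single-set-bit test is a one-liner (the function name here is just illustrative):

```python
def is_power_of_two_bits(n):
    # A power of two has exactly one '1' in its binary representation,
    # e.g. bin(8) == '0b1000'.
    return n > 0 and bin(n).count("1") == 1

print([n for n in range(1, 20) if is_power_of_two_bits(n)])  # [1, 2, 4, 8, 16]
```

Note that, like the question's code, this treats 1 (= 2^0) as a power of two.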

Another would be to return True for n == 2, return False if n is not an integer or is odd, and otherwise recursively test whether n // 2 is a power of 2. A lot of mathematical dynamic programming is recursive.

def isPowerOf2(n):
    if n < 2:          # also guards 0 and negatives, which would otherwise recurse forever
        return False
    if n % 2 == 1:
        return False
    if n == 2:
        return True
    return isPowerOf2(n // 2)

We could type-check it and memoize it like this:

powers_of_2 = [2]
def isPowerOf2(n):
    if not isinstance(n, int):
        raise TypeError("isPowerOf2(): integer value required")
    if n % 2 == 1:
        return False
    if n in powers_of_2:
        return True
    if n < powers_of_2[-1]:
        return False
    result = isPowerOf2(n // 2)
    if result:
        powers_of_2.append(n)
    return result

and test this with an example as follows:

>>> import power2 # name of our python code script file
>>> power2.isPowerOf2(256)
True
>>> power2.powers_of_2
[2, 4, 8, 16, 32, 64, 128, 256]

For something shorter, along the lines of @Tris Nefzger's suggestion, see: n & (n-1): what does this expression do? One could also examine log(N)/log(2) to see whether it is close to an integer value, raise 2 to that power, and test for equality. Neither of these last two methods is dynamic programming, but they are possibly more apropos for such a simple task in practice.
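Sketches of those two shortcuts, assuming the convention that 1 counts as a power of two (the function names are mine):

```python
import math

def is_power_of_two_bitwise(n):
    # n & (n - 1) clears the lowest set bit; for a power of two the result is 0.
    return n > 0 and (n & (n - 1)) == 0

def is_power_of_two_log(n):
    # Round log(n)/log(2) to the nearest integer, then verify by exponentiating,
    # which protects against floating-point error in the logarithm.
    if n <= 0:
        return False
    return 2 ** round(math.log(n, 2)) == n

print([n for n in range(1, 20) if is_power_of_two_bitwise(n)])  # [1, 2, 4, 8, 16]
print([n for n in range(1, 20) if is_power_of_two_log(n)])      # [1, 2, 4, 8, 16]
```

The final exponentiation in the log version is what makes it exact; comparing the logarithm to an integer directly would be fragile.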

Paul
  • Your example looks like recursion. IMHO the starting point of DP is the opposite of recursion. It starts computing at `1` and ends with `n`. – sschmeck Jul 15 '15 at 06:43
  • @sschmeck recursion plus memoization is a pretty common way of implementing dynamic programming. – jonrsharpe Jul 15 '15 at 06:55
  • @jonrsharpe I've learned to distinguish the two approaches, as here: http://stackoverflow.com/q/6164629/846163 – sschmeck Jul 15 '15 at 07:34
  • 2
    @sschmeck that doesn't seem like a terribly useful classification, given that dynamic programming can be bottom-up or top-down and implemented recursively or not. Also the memoization approach *does* start at `1`, that's the first subproblem it actually solves. – jonrsharpe Jul 15 '15 at 07:46
  • When we have to count the number of ways, it doesn't exhibit optimal substructure; it rather has a substructure. I think that is more important. – Shubham Sharma Jul 19 '15 at 08:34

Here is another approach besides recursion and memoization: a bottom-up table. You calculate the result beginning with the smallest possible value and construct the bigger values on top of it. It sounds similar to recursion, but the behavior differs; it's another starting point. See also Can recursion be dynamic programming?

def is_power_of_two(n):
    if n < 1:
        return False
    results = {}
    for i in range(1, n + 1):
        if i == 2:
            results[i] = True
        elif i % 2 != 0:
            results[i] = False
        else:
            results[i] = results[i // 2]
    return results[n]

Advantages:

  • avoids the stack-frame problems that recursion can run into
  • allows trimming the stored values for huge inputs (when you call the function with a very big number and limited memory, you can clean up `results` by removing all entries below i/2, since future iterations only look up results[j // 2] for j > i)
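The second advantage can be sketched like this (a hypothetical variant of the table above; the function name and pruning rule are my additions, assuming Python 3 integer division):

```python
def is_power_of_two_pruned(n):
    # Bottom-up table as above, but entries that no future
    # results[j // 2] lookup can reach are dropped to bound memory.
    if n < 1:
        return False
    results = {}
    for i in range(1, n + 1):
        if i == 2:
            results[i] = True
        elif i % 2 != 0:
            results[i] = False
        else:
            results[i] = results[i // 2]
        # Every future iteration j > i reads results[j // 2] with
        # j // 2 >= i // 2, so the entry just below i // 2 is dead.
        results.pop(i // 2 - 1, None)
    return results[n]
```

This keeps roughly the top half of the table live at any moment instead of all n entries, while returning the same answers as the unpruned version.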
sschmeck