A lecturer gave this question in class: [question]

A sequence of n integers is stored in an array A[1..n]. An integer a in A is called the majority if it appears more than n/2 times in A.

An O(n) algorithm can be devised to find the majority based on the following observation: if two different elements in the original sequence are removed, then the majority in the original sequence remains the majority in the new sequence. Using this observation, or otherwise, write programming code to find the majority, if one exists, in O(n) time.

for which this solution was accepted [solution]

int findCandidate(int[] a)
{
    int maj_index = 0; // index of the current candidate
    int count = 1;     // candidate's votes minus opposing votes seen so far
    for (int i = 1; i < a.length; i++)
    {
        if (a[maj_index] == a[i])
            count++;   // same value as the candidate: one more vote
        else
            count--;   // different value: cancels one of the candidate's votes

        if (count == 0) // votes exhausted: the next element becomes the candidate
        {
            maj_index = i;
            count = 1;
        }
    }
    return a[maj_index];
}

int findMajority(int[] a)
{
    int c = findCandidate(a);

    // Second pass: verify that the candidate really is the majority.
    int count = 0;
    for (int i = 0; i < a.length; i++)
        if (a[i] == c) count++;

    if (count > a.length / 2) return c;
    return -1; // just a marker - no majority found
}
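
For reference, a minimal usage sketch (the sample array is my own illustration, and it assumes both methods are static members of the same class):

int[] a = {3, 1, 3, 2, 3};
int m = findMajority(a); // m == 3, since 3 appears three times and 3 > 5/2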

I can't see how the solution provided is a dynamic programming solution, and I can't see how, based on the wording of the question, he arrived at that code.

Irwin
  • The question is rather general. Include your reasoning why it's not a dynamic solution to give people something to respond to. – JOTN Dec 14 '10 at 23:17
  • This isn't dynamic programming. And I don't see anyone (including your professor) saying it is, except in the title of your question. – Keith Randall Dec 15 '10 at 01:03
  • Keith, my professor said it's dynamic, I didn't pull the question (and solution) out of the internet ether. – Irwin Dec 15 '10 at 19:06

3 Answers

The term dynamic programming originated as an attempt to describe a really awesome way of optimizing certain kinds of solutions ("dynamic" was chosen because it sounded punchier). In other words, when you see "dynamic programming", you need to translate it into "awesome optimization".

MSN

'Dynamic programming' has nothing to do with dynamic memory allocation or anything of the sort; it's just an old term. In fact, it has little to do with the modern meaning of "programming" either.

It is a method for solving a specific class of problems: those where an optimal solution to a subproblem is guaranteed to be part of an optimal solution to the larger problem. For instance, if you want to pay $567 with the smallest number of bills, the solution will contain an optimal solution for one of the amounts $1..$566, plus one more bill.
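
To make that concrete, here is a minimal bottom-up sketch of the bill example (the denominations are my own assumption for illustration; including $1 guarantees every amount is payable):

int minBills(int amount)
{
    int[] bills = {1, 2, 5, 10, 20, 50, 100}; // assumed denominations
    int[] best = new int[amount + 1];         // best[v] = fewest bills that pay $v; best[0] = 0
    for (int v = 1; v <= amount; v++)
    {
        best[v] = Integer.MAX_VALUE;
        for (int b : bills)                   // an optimal payment of $v extends an
            if (b <= v && best[v - b] != Integer.MAX_VALUE) // optimal payment of $(v - b)
                best[v] = Math.min(best[v], best[v - b] + 1);
    }
    return best[amount];                      // e.g. minBills(567)
}

Each subproblem (the best way to pay $v) is solved once and stored in the table, which is what distinguishes dynamic programming from plain divide and conquer.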

The code in your question is just an application of one particular algorithm.

Victor Sergienko

This is dynamic programming because the findCandidate function is breaking the provided array down into smaller, more manageable parts. In this case, it starts with the first element of the array as a candidate for the majority. By increasing the count when that element is encountered and decreasing it when a different element is encountered, it tracks whether the candidate can still be the majority. When the count reaches zero, we know that the first i elements have no majority, so the next element becomes the new candidate. By continually tracking this local candidate we don't need to iterate through the array more than once in the candidate-identification phase. We then check whether that candidate is actually the majority by going through the array a second time, giving us O(n). It actually runs in 2n time, since we iterate twice, but the constant doesn't matter.
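
As a quick illustration of the candidate phase (the sample array is my own):

// Trace of findCandidate on {3, 1, 3, 2, 3}:
// i=1: a[1]=1 != candidate 3 -> count hits 0 -> new candidate a[1]=1, count=1
// i=2: a[2]=3 != candidate 1 -> count hits 0 -> new candidate a[2]=3, count=1
// i=3: a[3]=2 != candidate 3 -> count hits 0 -> new candidate a[3]=2, count=1
// i=4: a[4]=3 != candidate 2 -> count hits 0 -> new candidate a[4]=3, count=1
// returns a[4] = 3; the verification pass counts three 3s, and 3 > 5/2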

Wade Tandy
    "Breaking problem down into smaller subproblems" is the definition of Divide&Conquer approach, and is not even remotely sufficient for Dynamic Programming. In order to become Dynamic Programming it also has to memoize the solutions for subproblems and be able to retrieve the memoized solutions when a previously solved subproblem re-appears (so that no subproblem is solved more than once). In this algorithm it is rather non-obvious... I wonder if it is really a good example of Dynamic Programming (if at all). – AnT stands with Russia Dec 14 '10 at 23:46
  • I agree with Andrey. A characteristic of DP is that storage is consumed by the _dynamic_ part. This can be memoizing or a lookup table for example. I don't see anything dynamic here – John La Rooy Dec 15 '10 at 00:12
  • "Divide the problem instance into a number of subinstances (in most cases 2), recursively solves each subinstance separately and then combines the solutions to the subinstances to obtain the solution to the original problem..." -Alsuwaiyel. By that definition, I can't say the solution given above is Divide and Conquer. – Irwin Dec 15 '10 at 19:19
  • Agree with Irwin. According to wikipedia: "dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems." This algorithm is doing exactly that, by my estimation. – Wade Tandy Dec 15 '10 at 19:23
  • @WadeTandy Indeed, there's nothing wrong with DP using recursion and the logic behind divide and conquer; after all, when thinking about overlapping subproblems we are thinking about how to divide the original problem. However, as AndreyT rightly pointed out, DP involves some kind of memoization/lookup table. A great explanation of the differences between D&C and DP is here: http://stackoverflow.com/questions/13538459/difference-between-divide-and-conquer-algo-and-dynamic-programming – Victor Farazdagi Nov 16 '13 at 08:47