Umh, let's start by understanding why the algorithm works, in order to "isolate" the ideas behind it.
The point of the algorithm is that if you have a majority element, then you can pair each occurrence of it with a different element and still have some occurrences left over as "spares".
So we just keep a counter that tracks the number of "spare" occurrences of our current guess.
If it reaches 0, then the guess is not a majority element of the subsequence running from where we "elected" it as the candidate majority element up to the current position.
Moreover, since our guessed element has been paired with every other element occurrence in that subsequence, no element at all is a majority element of that subsequence.
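
To make the counter mechanics concrete, here is a minimal sketch of the voting pass in Python (the name `majority_candidate` is mine, not from the original):

```python
def majority_candidate(seq):
    """One O(1)-space, O(n)-time pass: returns a candidate that is
    the majority element *if* one exists (it must be verified later)."""
    candidate, spare = None, 0
    for x in seq:
        if spare == 0:
            # Counter hit zero: elect the current element as the new guess.
            candidate, spare = x, 1
        elif x == candidate:
            # One more "spare" occurrence of our guess.
            spare += 1
        else:
            # Pair one occurrence of the guess against another element.
            spare -= 1
    return candidate
```
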
Now, since:
- our algorithm only needs to give a correct answer when a majority element exists, and
- if there is a majority element, it is still a majority element if we ignore the "current" subsequence whenever the counter reaches zero (it occurs in at most half of the discarded subsequence, so it is still in the majority of what remains),
it is easy to see by contradiction that, if a majority element exists, then there is a suffix of the whole sequence on which the counter never drops to zero, and the candidate elected at the start of that suffix is exactly that majority element.
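
Because the vote is only meaningful when a majority actually exists (the first point above), a second pass is needed to confirm the candidate. A sketch, building on the hypothetical `majority_candidate` above:

```python
def majority(seq):
    cand = majority_candidate(seq)
    # Verify: the voting pass is only correct when a majority exists.
    if cand is not None and sum(1 for x in seq if x == cand) * 2 > len(seq):
        return cand
    return None  # no majority element

print(majority([3, 1, 3, 2, 3]))  # 3: occurs 3 times out of 5
print(majority([1, 2, 3]))        # None: no majority
```
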
Now: what's the idea that can be exploited in new O(1) space, O(n) time algorithms?
To me, you can apply this technique whenever you have to compute a property P on a sequence of elements such that:
- P(seq[n, m]) can be extended to P(seq[n, m+1]) in O(1) time if Q(seq[n, m+1]) doesn't hold, and
- P(seq[n, m]) can be computed in O(1) time and space from P(seq[n, j]) and P(seq[j, m]) if Q(seq[n, j]) holds.
In our case, P is the number of "spare" occurrences of our "elected" majority candidate and Q is "P is zero".
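
As a sketch of how this P/Q template might look in general (all names here are hypothetical, chosen just for illustration; this is one way to read the idea, not a canonical implementation):

```python
def scan_with_restart(seq, init, extend, q_holds):
    """Maintain P over a window seq[j..i] in O(1) space.
    extend() grows the window by one element while Q doesn't hold;
    when Q holds, the window is discarded and the scan restarts."""
    p = None
    for x in seq:
        p = init(x) if p is None else extend(p, x)
        if q_holds(p):
            p = None  # Q holds: safe to forget this window entirely
    return p

# Majority vote as an instance: P = (candidate, spare count),
# Q = "the spare count is zero".
print(scan_with_restart(
    [3, 1, 3, 2, 3],
    init=lambda x: (x, 1),
    extend=lambda p, x: (p[0], p[1] + 1) if x == p[0] else (p[0], p[1] - 1),
    q_holds=lambda p: p[1] == 0,
))  # (3, 1): candidate 3 with one spare occurrence
```
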
If you look at things that way, longest common subsequence exploits the same idea (dunno about its "coolness factor" ;))