
I do not understand the O(2^n) complexity that the recursive function for the Longest Common Subsequence algorithm has.

Usually I can tie this notation to the number of basic operations (in this case, comparisons) that the algorithm performs, but this time it doesn't make sense in my mind.

For example, take two strings, each of length 5. In the worst case the recursive function performs 251 comparisons, and 2^5 is not even close to that value.

Can anyone explain the algorithmic complexity of this function?

nComp = 0  # global comparison counter

def lcs(xstr, ystr):
    global nComp
    if not xstr or not ystr:
        return ""
    x, xs, y, ys = xstr[0], xstr[1:], ystr[0], ystr[1:]
    nComp += 1
    #print("comparing", x, "with", y)
    if x == y:
        return x + lcs(xs, ys)
    else:
        return max(lcs(xstr, ys), lcs(xs, ystr), key=len)
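For reference, the 251 figure comes from running the function on two 5-character strings with no characters in common (the worst case, since the x == y shortcut never fires); a self-contained check:

```python
nComp = 0  # global comparison counter

def lcs(xstr, ystr):
    global nComp
    if not xstr or not ystr:
        return ""
    x, xs, y, ys = xstr[0], xstr[1:], ystr[0], ystr[1:]
    nComp += 1
    if x == y:
        return x + lcs(xs, ys)
    else:
        return max(lcs(xstr, ys), lcs(xs, ystr), key=len)

lcs("ABCDE", "FGHIJ")  # no common characters, so every comparison branches twice
print(nComp)  # prints 251
```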
Daniel Catita
  • You might want to include a specific algorithmic description you're looking at and trying to understand. – Nathaniel Ford Jan 08 '16 at 23:56
  • Big-O complexity dictates how a function grows for very large values, down to a constant factor. An algorithm can be O(1) and still take 100million operations. Or `O(n)` and take 50000 ops for `n=1`, and 10000 ops for `n=2`. It's all dependant how it grows as N grows arbitrarily large. Additionally, 5 is an extremely small `n` value. Without an algorithm, there's not much more to say. – CollinD Jan 08 '16 at 23:56
  • @CollinD but having two strings, whose sizes may even be different, what is n? – Daniel Catita Jan 08 '16 at 23:58
  • Im adding the algorithm to the post. – Daniel Catita Jan 08 '16 at 23:59
  • In this case, n is likely the length of the longer string. – CollinD Jan 09 '16 at 00:24

2 Answers


To understand it properly, look at the diagram carefully and follow the recursive calls from top to bottom as you read the graph.

Here, xstr = "ABCD"
      ystr = "BAEC"

                                    lcs("ABCD", "BAEC")       // Here x != y 
                                  /                     \  
                lcs("BCD", "BAEC")   <--  x==y   -->    lcs("ABCD", "AEC")  x==y
                          |                                        |
                          |                                        |
                  lcs("CD", "AEC")   <--  x!=y   -->     lcs("BCD", "EC")
                 /                \                     /                \
                /                  \                   /                  \
               /                    \                 /                    \
      lcs("D","AEC")                  lcs("CD", "EC")              lcs("BCD", "C")
    /                \              /               \              /        \       
lcs("", "AEC")        lcs("D","EC")                  lcs("CD", "C")        lcs("BCD","")
  |        \         /              \                       |             /       |
Return     lcs("", "EC")    lcs("D" ,"C")            lcs("D", "")   lcs("CD","")  Return
           /         \       /         \             /        \       /        \ 
        Return      lcs("","C")    lcs("D","") lcs("","")  Return  lcs("D","")  Return
                     /     \         /     \      /                 /      \
                  Return   lcs("","")       Return            lcs("", "")  Return
                                 |                                  |
                              Return                            Return

NOTE: Recursive calls are usually represented with a tree, but here I used a graph-like layout to compress the tree, so one can easily follow the recursive calls in one go. And, of course, it was easier for me to draw.


  • In the above diagram there are redundant pairs, such as lcs("CD", "EC"), which arises both from deleting "A" from "AEC" in lcs("CD", "AEC") and from deleting "B" from "BCD" in lcs("BCD", "EC"). As a result, these pairs are computed more than once during execution, which increases the running time of the program.

  • As you can easily see, every pair generates two subproblems for the next level until it encounters an empty string or x == y. Therefore, if the lengths of the strings are n and m (say xstr has length n and ystr has length m, and we consider the worst case), the total number of outcomes at the bottom is of the order 2^(n+m). (How? Think about it.)

Since n + m is an integer, let's call it N. The time complexity of the algorithm is therefore O(2^N), which is not efficient for larger values of N.
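To make the redundancy concrete, here is a sketch (an instrumented copy of the question's function, with counter names of my own choosing) that counts total calls versus distinct (xstr, ystr) pairs for the worst case of two disjoint 5-character strings:

```python
calls = 0         # every recursive invocation
distinct = set()  # distinct (xstr, ystr) subproblems actually seen

def lcs(xstr, ystr):
    global calls
    calls += 1
    distinct.add((xstr, ystr))
    if not xstr or not ystr:
        return ""
    if xstr[0] == ystr[0]:
        return xstr[0] + lcs(xstr[1:], ystr[1:])
    return max(lcs(xstr, ystr[1:]), lcs(xstr[1:], ystr), key=len)

lcs("ABCDE", "FGHIJ")
print(calls, len(distinct))  # 503 calls, but only 35 distinct pairs
```

Only about n·m distinct subproblems exist, yet the plain recursion visits exponentially many nodes; that gap is exactly what memoization or dynamic programming closes.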

Therefore, we prefer the dynamic-programming approach over the recursive approach. It reduces the time complexity to O(n·m), which is O(n^2) when n == m.
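A minimal bottom-up sketch of that dynamic-programming approach (the name lcs_dp is my own, not from the original code):

```python
def lcs_dp(xstr, ystr):
    n, m = len(xstr), len(ystr)
    # dp[i][j] = length of the LCS of xstr[:i] and ystr[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if xstr[i - 1] == ystr[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack through the table to recover one LCS string
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if xstr[i - 1] == ystr[j - 1]:
            out.append(xstr[i - 1])
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs_dp("ABCD", "BAEC"))  # one LCS of length 2
```

Each of the (n+1)·(m+1) table cells is filled exactly once with O(1) work, which is where the O(n·m) bound comes from.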

Even now, if you are having a hard time understanding the logic, I would suggest you draw a tree-like representation (not the graph I have shown here) for xstr = "ABC" and ystr = "EF". I hope you will understand it then.

Any doubts or comments are most welcome.
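Alternatively, the exponential blow-up can be removed without switching to a table: since only the pair of suffix indices (i, j) matters, memoizing the recursion also gives O(n·m). A sketch using functools.lru_cache (the name lcs_memo is my own):

```python
from functools import lru_cache

def lcs_memo(xstr, ystr):
    @lru_cache(maxsize=None)
    def go(i, j):
        # LCS of the suffixes xstr[i:] and ystr[j:], computed once per (i, j)
        if i == len(xstr) or j == len(ystr):
            return ""
        if xstr[i] == ystr[j]:
            return xstr[i] + go(i + 1, j + 1)
        return max(go(i, j + 1), go(i + 1, j), key=len)
    return go(0, 0)
```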

surajs1n
  • Wow, great answer! What bugs me the most is the number of comparisons in the recursive algorithm being a number so diferent from 2^N. With dynamic-programming the n*m actually is also the number of comparisons, so one would think that this should ~always~ apply. – Daniel Catita Jan 09 '16 at 12:19
  • Can't we memoize the recursive LCS? – Jorge Barrios Sep 30 '17 at 16:11

O(2^n) means the run time is proportional to (2^n) for large enough n. It doesn't mean the number is bad, high, low, or anything specific for a small n, and it doesn't give a way to calculate the absolute run-time.

To get a feel for the implication, you should consider the run-times for n = 1000, 2000, 3000, or even 1 million, 2 million, etc.

In your example, assuming that for n=5 the algorithm takes at most 251 iterations, the O(2^n) prediction is that for n=50 it would take on the order of 2^(50)/2^(5)*251 = 2^45*251 = ~8.8E15 iterations.
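That extrapolation is easy to verify:

```python
# Scaling the measured 251 iterations at n = 5 up to n = 50,
# assuming the iteration count grows like 2^n:
estimate = 2 ** (50 - 5) * 251
print(estimate)  # 8831277394296832, i.e. roughly 8.8e15
```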

Aganju
  • Ok, I got that, but having two strings, whose sizes may even be different, what is n? – Daniel Catita Jan 09 '16 at 00:01
  • As the longest common sub-sequence can never be longer than the shorter string, it would probably be the length of the shorter string. Not that it matters too much – Aganju Jan 09 '16 at 00:03
  • O(2^n) doesn't mean proportional to 2^n. – Paul Hankin Jan 09 '16 at 00:24
  • @Aganju would it not be the length of the longer string. Considering the case where string a is 2 chars, string b is 100 chars, makes me suspect it's the longer one. – CollinD Jan 09 '16 at 00:25