
I don't understand how these two implementations of the look-and-say sequence can have such different execution times. They perform very similar operations, but the larger I make n, the bigger the time difference between them becomes.

The two algorithms should both be O(n^2) for the n-th term of the sequence: the while loop in the first one iterates n times, just as the for loop in the second one does, and the nested for loop in the first one scans every character of the current term, just as the two nested while loops in the second one do.

This is the first one:

def look_and_say_sequence(first, n):

    while n != 1:

        succ, start, k = '', first[0], 0

        for x in first:
            if x != start:
                # the run of 'start' ended: emit its count and digit, start a new run
                succ, start, k = succ + str(k) + start, x, 1
            else:
                k = k + 1

        first = succ + str(k) + start
        n -= 1

    return first

look_and_say_sequence('1', 48) # 10 SECONDS 

And this is the second one:

def look_and_say_sequence(first, n):

    for _ in range(n - 1):

        seq, res, ind = first, '', 0

        while ind < len(seq):

            pending, c = seq[ind], 0

            # count the run of identical characters starting at ind
            while ind < len(seq) and pending == seq[ind]:
                c += 1
                ind += 1

            res += "{}{}".format(c, pending)

        first = res  # feed the new term into the next iteration

    return res

look_and_say_sequence('1', 48) # 2 SECONDS
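To double-check that the per-step work really is the same, I also counted how many characters each step scans. Here is a rough sketch based on the first version (the visits counter and the count_visits name are only there for illustration):

def count_visits(first, n):
    # Same structure as the first implementation, but it counts how many
    # characters each step touches instead of building timings.
    visits = 0
    while n != 1:
        succ, start, k = '', first[0], 0
        for x in first:
            visits += 1  # one visit per character of the current term
            if x != start:
                succ, start, k = succ + str(k) + start, x, 1
            else:
                k = k + 1
        first = succ + str(k) + start
        n -= 1
    return visits

print(count_visits('1', 30))

Instrumenting the second version the same way gives exactly the same count per step, which is why I expected the same Big O.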

So, how can it be that there is a factor of 5 between the two implementations?

Thank you for the help!

Tortar
  • What are the times that confuse you? What have you done to track the algorithmic complexity ... such as counting loop iterations? – Prune Jan 24 '20 at 21:37
  • Well... they seem to have the same loops even if they are written differently, am I wrong? – Tortar Jan 24 '20 at 21:40
  • This is not a site for "seem to" questions. [Analyze](https://stackoverflow.com/questions/3255/big-o-how-do-you-calculate-approximate-it) the implementations. We expect you to attempt a solution on your own before posting here. – Prune Jan 24 '20 at 21:44
  • So, they have for sure different Big O? – Tortar Jan 24 '20 at 21:46
  • _So, they have for sure different Big O?_ I'm guessing you're referring to the comment by @Prune? I don't think they were implying that at all. – AMC Jan 24 '20 at 21:50
  • That link is to give you the tools for analysis. You have not yet presented any hard data that suggests the two have different complexities. When you focus your question on such points, you may get answers. – Prune Jan 24 '20 at 22:08
  • 2 seconds vs. 10 seconds is not a big difference; if it was 2 seconds vs. 10 minutes then it would be quite likely that one algorithm is asymptotically faster, but a factor of 5 often just means that one algorithm is more optimised in practice. Try analysing the running times in more depth. – kaya3 Jan 24 '20 at 22:19

1 Answer


Your slowdown lies in the line

succ, start, k = succ + str(k) + start, x, 1

which causes major runtime losses when succ is a very long string (as certainly becomes the case as n grows large: succ is over 500k characters long for n = 48). Try running the following script twice, once with each version of the marked line:

import time

a = time.time()
s = ''
for i in range(99999):
    s = (s + '1') + '1'  # comment this out
    # s += '1' + '1'     # and uncomment this to see the speed difference

b = time.time()
print(b - a)

you'll notice the version using += runs dramatically quicker. Python evaluates left to right, so (s + '1') has to be built as a brand-new string before the final + '1' is appended, and copying all of s on every iteration is what adds so much slowdown. (With +=, CPython can usually just extend the existing string in place instead of copying it.)
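If you want to see that this is a growth problem rather than a constant overhead, you can time the same loop at two sizes. A rough sketch (build and use_inplace are just illustrative names, and the exact numbers depend on your machine):

import time

def build(n, use_inplace):
    # Builds a string of 2*n characters with one of the two concatenation styles.
    s = ''
    start = time.time()
    for _ in range(n):
        if use_inplace:
            s += '1' + '1'        # CPython can usually extend s in place
        else:
            s = (s + '1') + '1'   # copies all of s on every iteration
    return time.time() - start

for n in (50000, 100000):
    print(n, build(n, False), build(n, True))

Doubling n should roughly quadruple the time of the copying version, while the += version stays close to linear.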

FWIW, if you change that problem line to

succ += str(k) + start
start = x
k = 1

you'll actually see that your top algorithm runs faster.
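For completeness, this is roughly what the first function looks like with that change applied (a sketch with the same logic as the question's code, just appending to succ in place):

def look_and_say_sequence(first, n):
    while n != 1:
        succ, start, k = '', first[0], 0
        for x in first:
            if x != start:
                succ += str(k) + start   # append instead of rebuilding succ
                start, k = x, 1
            else:
                k = k + 1
        first = succ + str(k) + start
        n -= 1
    return first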

  • Thank you very much!!! The thing to understand is why a difference that seems to be just a matter of syntax leads to different running times – Tortar Jan 24 '20 at 22:28
  • Updated the example to show why Python behaves this way, hopefully that makes sense to you – Areeb Malik Jan 24 '20 at 22:44