3

I have designed an algorithm, but I am confused whether its time complexity is Θ(n) or Θ(n²).

def prefix_soms(number):
    Array_A = [1, 2, 3, 4, 5, 6]
    Array_B = []
    soms = 0
    for i in range(number):
        soms = soms + Array_A[i]
        Array_B.insert(i, soms)
    return Array_B[number - 1]

I know the for loop is running n times so that's O(n).

Are the operations inside the loop O(1)?

Federico klez Culloca
jelli
  • For arbitrarily large numbers, it is not, since adding two huge numbers takes logarithmic time in the *value* of these numbers. For non-huge numbers, it is ok. But you can make your code more elegant. – Willem Van Onsem Sep 15 '19 at 10:18
  • You can take a look at this: https://stackoverflow.com/questions/27073596/what-is-the-cost-complexity-of-insert-in-list-at-some-location – Stefan Falk Sep 15 '19 at 10:18
  • Would say inside list insertion operation is O(n) (see https://stackoverflow.com/questions/27073596/what-is-the-cost-complexity-of-insert-in-list-at-some-location) – DarrylG Sep 15 '19 at 10:19
  • @DarrylG: `.insert(..)` is *O(n)* with *n* the elements at the right side of the insertion point. Since we here always append at the right end, it is just an `.append(..)` (but I agree it is not elegant at all) – Willem Van Onsem Sep 15 '19 at 10:21
  • @DarrylG: No, since we insert at the *end* of the list, here, so an `.insert(last_index, value)` will still run in amortized cost *O(1)* :) – Willem Van Onsem Sep 15 '19 at 10:25
  • @WillemVanOnsem Yes, you're right. I hadn't noticed he was inserting at the end of the list which would be constant time as you say. – DarrylG Sep 15 '19 at 10:27
  • @WillemVanOnsem do you agree that a better faster method for what he's doing (i.e. performing an accumulation and maintaining intermediate results) is to use itertools.accumulate per https://stackoverflow.com/questions/12233788/reducelist-in-python-like-reduce-but-giving-the-list-of-intermediate-results? – DarrylG Sep 15 '19 at 10:31
  • @DarrylG: yes of course there are faster ways than using an extra list. Especially since this will consume extra memory, and hence can increase the amount of cache faults, etc. But time complexity-wise it does not change the performance. Of course time complexity does not take "details" into account such as cache faults, etc. – Willem Van Onsem Sep 15 '19 at 10:34
  • If one of the answers is helpful for you, you can accept it as the answer by clicking the tick icon at the left side and under the voting number. – Soner from The Ottoman Empire Sep 15 '19 at 13:35

3 Answers

1

For arbitrarily large numbers, it is not: adding two huge numbers takes time logarithmic in the value of those numbers. If we assume the sum does not grow out of control, then we can say that it runs in O(n). The .insert(…) here is basically just an .append(…), and the amortized cost of appending n items is O(n).
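Since the loop always inserts at index i, which equals the current length of Array_B, the insert never shifts any elements; a minimal sketch illustrating that it behaves exactly like an append:

```python
# Inserting at the current length of a list is the same as appending:
# both place the new element at the end in amortized O(1) time.
xs = []
ys = []
for i in range(6):
    xs.insert(i, i * i)   # index == len(xs), so nothing is shifted
    ys.append(i * i)      # plain append
assert xs == ys == [0, 1, 4, 9, 16, 25]
```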

We can however improve the readability and memory usage by writing this as:

def prefix_soms(number):
    Array_A = [1, 2, 3, 4, 5, 6]
    soms = 0
    for i in range(number):
        soms += Array_A[i]
    return soms

or:

def prefix_soms(number):
    Array_A = [1, 2, 3, 4, 5, 6]
    return sum(Array_A[:number])

or we can avoid creating a copy of the list by using islice(…):

from itertools import islice

def prefix_soms(number):
    Array_A = [1, 2, 3, 4, 5, 6]
    return sum(islice(Array_A, number))

We thus do not need to use another list, since we are only interested in the last item.
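A quick sanity check of the islice variant above (assuming number is at most the length of Array_A):

```python
from itertools import islice

def prefix_soms(number):
    Array_A = [1, 2, 3, 4, 5, 6]
    # islice iterates over only the first `number` elements,
    # without building an intermediate list.
    return sum(islice(Array_A, number))

print(prefix_soms(3))   # 1 + 2 + 3 -> 6
print(prefix_soms(6))   # 1 + 2 + 3 + 4 + 5 + 6 -> 21
```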

Willem Van Onsem
  • @DeepSpace: it is of course a bit sad, that Python does not have list *fusion* (well I think it might be fundamentally impossible to implement that in a language that has mutable objects), and thus could indeed use slicing without overhead :) – Willem Van Onsem Sep 15 '19 at 10:26
1

Given that the insert method never shifts your array (in your algorithm it only ever appends one element at the end of the list), its time complexity is O(1). Moreover, accessing an element by index takes O(1) time as well.

You run the loop number times, and each iteration does a few O(1) operations: O(number) · O(1) = O(number).

0

The complexity of list.insert is O(n), as shown on this wiki page. You can check the blist library, which provides an optimized list type with an O(log n) insert, but in your algorithm the item is always placed at the end of Array_B, so it is just an append, which takes constant amortized time (you can replace the insert with append to make the code more elegant).
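To illustrate, a sketch of the original function with insert replaced by append; the result is unchanged:

```python
def prefix_soms(number):
    Array_A = [1, 2, 3, 4, 5, 6]
    Array_B = []
    soms = 0
    for i in range(number):
        soms += Array_A[i]
        Array_B.append(soms)   # appending at the end: amortized O(1)
    return Array_B[number - 1]

print(prefix_soms(4))   # 1 + 2 + 3 + 4 -> 10
```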