
I am a beginner, and I have a fundamental doubt...
What is the Big-O runtime of this simple algorithm?
Is it O(n^2) [because of the two for loops] or O(n^3) [also counting the product of two numbers, which itself should be O(n)]?

MaxPairwiseProductNaive(A[1 . . . n]):
    product ← 0
    for i from 1 to n:
        for j from i + 1 to n:
            product ← max(product, A[i] · A[j])
    return product
– Lone_Wolf

1 Answer


Both your first and second loops are O(n); since the second is nested inside the first, together they are O(n^2).

Inside your loop, neither the max nor the multiplication depends on the number of array elements. For languages like C, C++, C#, Java, etc., getting a value from an array does not add time complexity as n increases, so we say that it is O(1).

While the multiplication and max do take time, they are also O(1), because they always run in constant time regardless of n. (I will note that the products here can get extremely large for arrays containing values > 1, so I'm guessing some languages might start to slow down past a certain value, but that's just speculation.)

All in all, you have

O(n):
    O(n):
        O(1)

So the total is O(n^2).
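
For concreteness, here is a rough C# rendering of your pseudocode with the cost of each piece noted as a comment. The method and variable names are mine, not from your course, and this is just a sketch of the same algorithm:

    // Sketch only: a C# rendering of the pseudocode in the question,
    // with the cost of each piece noted. Names are mine, not from the course.
    static long MaxPairwiseProductNaive(long[] a)
    {
        long product = 0;                          // O(1)
        for (int i = 0; i < a.Length; i++)         // outer loop: n iterations -> O(n)
        {
            for (int j = i + 1; j < a.Length; j++) // inner loop: at most n-1 iterations -> O(n)
            {
                // indexing, multiplication and Math.Max are each O(1)
                product = Math.Max(product, a[i] * a[j]);
            }
        }
        return product;                            // O(1)
    }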

Verification:

As a reminder: while time complexity analysis is a vital tool, you can always verify it by measuring. I did this in C# and measured both the running time and the number of inner loop executions.

[Plot: inner loop executions vs n]

That trendline sits just barely under executions = 0.5n^2, which makes sense: the inner statement runs (n-1) + (n-2) + ... + 1 = n(n-1)/2 times in total, which approaches 0.5n^2 as n grows. You can step through your loops on a piece of paper and immediately see the pattern.

  • n=5: 5 outer loop iterations, 10 total inner loop iterations
  • n=10: 10 outer loop iterations, 45 total inner loop iterations
  • n=20: 20 outer loop iterations, 190 total inner loop iterations

For timing, we see an almost identical trend with just a different constant. This indicates that our time, T(n), is directly proportional to the number of inner loop iterations. [Plot: running time vs n]
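
If you'd like to reproduce the measurement yourself, here is a minimal sketch of one way to do it (this is not my exact benchmark code: no warm-up, a single run per n, and the input values are arbitrary):

    // Minimal sketch: count inner loop executions and time the naive algorithm
    // for a few values of n. Not a rigorous benchmark.
    using System;
    using System.Diagnostics;

    class Program
    {
        static void Main()
        {
            var rng = new Random(42);
            foreach (int n in new[] { 5, 10, 20, 1000, 2000, 4000 })
            {
                long[] a = new long[n];
                for (int k = 0; k < n; k++) a[k] = rng.Next(1, 1000);

                long product = 0;
                long innerExecutions = 0;          // counts executions of the loop body
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < n; i++)
                    for (int j = i + 1; j < n; j++)
                    {
                        innerExecutions++;
                        product = Math.Max(product, a[i] * a[j]);
                    }
                sw.Stop();

                // innerExecutions comes out to n*(n-1)/2, e.g. 10 for n=5, 45 for n=10
                Console.WriteLine($"n={n}: {innerExecutions} inner iterations, " +
                                  $"{sw.Elapsed.TotalMilliseconds:F3} ms, max product {product}");
            }
        }
    }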

Takeaway:

The analysis which gave O(n^2) worked perfectly. The statements within the loop were O(1) as expected. The check didn't end up being necessary, but it's useful to see just how closely the analysis and verification match.

– mcky
  • Thanks @mcky | In one of the tutorials I was following, they remarked that the addition of two numbers would be O(n). So they are wrong, and it would be O(1) like multiplication, right? – Lone_Wolf Jul 26 '22 at 08:45
  • That is correct. Addition will scale with how many numbers you are adding, so if you are adding 2 numbers, it will always be `O(1)`. If you decided to add n numbers together, you would end up with `O(n)` – mcky Jul 26 '22 at 08:49
  • As another quick note on the theory behind this, we assume multiplication and addition are O(1) because we have constant word size (8,16,32,.. bits) which won't increase. If we had a system that used a non-constant bit length to store numbers, our time complexity would increase as the number of digits increases, but we can take the worst case of say a 512 bit number (which is absurdly large) and say adding those is O(1), so anything smaller is also O(1). – mcky Jul 26 '22 at 09:01
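
To make that last distinction concrete, a tiny sketch in the same C# as above (names are mine): adding two fixed-width numbers is a single O(1) operation, while summing n numbers loops n times, i.e. O(n).

    // Illustration only: two fixed-width numbers -> one O(1) addition;
    // n numbers -> a loop whose work grows linearly with n, i.e. O(n).
    static long AddTwo(long x, long y) => x + y;   // O(1): a single machine addition

    static long SumAll(long[] values)
    {
        long sum = 0;
        foreach (long v in values)                 // n iterations -> O(n)
            sum += v;
        return sum;
    }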