6

I'm learning programming using YouTube and whatever I can find. I stumbled upon some Big O problems and I'm quite confused on one of them.

for (int i = 0; i < n; i++)
    for (int j = 0; j < 2; j++)
        cout << i << " " << j << endl;

My first thought was that it is O(n^2) because there are 2 for loops. Then I took a closer look and noticed that the inner for loop only goes from 0 to 2.

From what I understand, Big O looks at the worst-case scenario. My thought was that if n = 2 then it would result in O(n^2); however, all other cases result in O(n), which is what is really confusing me.

Could someone please explain why this is O(n^2) or O(n)?

Yu Hao
Jade L
    Big-O notation looks at *very large* n. Theoretically, Big-O concerns the **limit** of the running time / number of operations as n goes to **infinity** (so it gives "infinite running times" a still *comparable* "quantity"). So if you compare two algorithms, one O(n) and the second O(n^2), that only means that as n approaches infinity, the first is faster. For practical values of n, this can mean something totally different, as it might be the case that the first is only faster for an n in the range of billions... – leemes Jun 18 '14 at 08:01
  • 1
    Ah and by the way: even if your algorithm is O(n), it is still O(n^2). That's because Big-O is an *upper bound* on the asymptotic running time. There is Big-Theta, which states an *exact* formula for the asymptotic running time, and Big-Omega, which is a *lower bound*. Both are less popular because you typically want to express an upper bound on the running time, since you accept the case where the real running time is even faster. For details, see http://stackoverflow.com/q/464078/592323. – leemes Jun 18 '14 at 08:15

5 Answers

10

It's O(n). When a function f(n) is O(n), it basically means that there are some constants A and B such that for each n (at least for large-enough n):

f(n) <= A * n + B

In other words, constants (multiplicative and additive) do not affect big-O notation. And since 2 is a constant (it does not depend on n), it is not reflected in the big-O notation.

In your case, the function f is the time complexity - i.e. how many constant-time steps the program takes for an input of size n.

Angew is no longer proud of SO
5

Technically, this is both, because Big O is an upper bound and everything that's O(n) is also O(n^2).

But yes, this is O(n). Θ(n), in fact. (See this question for the distinction.)

Big O is not "the worst case scenario". Some algorithms have "average case" and "worst case" time complexities that are very different. Quick sort, for instance, is O(n log n) average case and O(n^2) worst case. No one usually calls quick sort an O(n^2) algorithm, though.

Big O is a ceiling of how much the complexity of an algorithm grows as n increases. The complexity of an O(n) algorithm grows at most linearly. In your case, as the other answers explained, the inner loop is constant time, so the overall time complexity is a linear function of n - hence, O(n).

T.C.
2

The only "handle" you have on this "algorithm" is n, and n is only used in the outer loop.

The inner loop is constant time complexity, so overall you have O(n).

juanchopanza
2

The inner loop always performs two iterations, no matter the input. Therefore it is constant. The outer loop performs the inner loop's operations n times, so its complexity is n times the complexity of the inner loop. Overall you perform n times a constant number of operations, so your complexity is O(n).

There is an experiment you can do: if the complexity of your algorithm is O(n^2), then the number of operations it performs for 2 * n should be about 4 times the number of operations performed for n. So try this: if your algorithm performs K operations for n = 100, how many will it perform for n = 200?

Ivaylo Strandjev
2

Ο-Notation (Upper Bound):

This notation gives an upper bound for a function to within a constant factor. We write f(n) = O(g(n)) if there are positive constants n0 and c such that to the right of n0, the value of f(n) always lies on or below c g(n).

Ο(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c g(n) for all n ≥ n0}

[figure: plot showing f(n) lying on or below c·g(n) to the right of n0]

This means that saying your function is of order O(n^2) or even O(n^3) is true, because it is an upper bound.

But your program depends only on n (constant terms have no effect for large n; 2n is equivalent to n here), therefore O(n) is a tight upper bound, and hence you can say its time complexity is O(n).

haccks