I have searched all over Stack Overflow and couldn't find a concise answer to this. Consider this code:
```python
n = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]  # O(1)
for i in n:      # O(n)
    for j in i:  # O(n)
        print(j)
```
If my understanding of `O()` is correct, the time complexity of this algorithm is `1 + n*n`, and if we throw away the insignificant `1 +`, it becomes `n^2`, or quadratic time.
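To make my counting concrete, here is a sketch of the same loops that counts iterations instead of printing (the helper name and the row/column parameters are just mine, for illustration):

```python
def count_inner_iterations(rows, cols):
    # Build a list of `rows` sublists with `cols` elements each,
    # then count how many times the inner loop body runs.
    n = [[0] * cols for _ in range(rows)]
    count = 0
    for i in n:      # runs `rows` times
        for j in i:  # runs `cols` times per row
            count += 1
    return count

print(count_inner_iterations(4, 3))  # 4 rows * 3 columns = 12
```

So the body runs rows * cols times, which is what I'm calling `n*n` when both dimensions grow with the input.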
Now consider this code:
```python
n = input()  # O(1)
for i in range(0, len(n), 2):  # O(n/2)?
    print(n[i])
```
This time we use division instead of an exponent: `1 + n/2` -> `n/2`.
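Counting iterations the same way (again just my own sketch, with a made-up helper name):

```python
def count_every_other(n_chars):
    # Simulate an input string of length n_chars and count the
    # iterations of a loop that steps through it two at a time.
    s = "x" * n_chars
    count = 0
    for i in range(0, len(s), 2):  # step of 2 -> about len(s)/2 iterations
        count += 1
    return count

print(count_every_other(10))  # 5
print(count_every_other(11))  # 6 (rounds up for odd lengths)
```

That's where my `n/2` comes from.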
But everywhere I searched, people said `O(n/2) == O(n)`, because `O()` measures relative growth, etc. But if `O(n/2)` is an absolute count, and we're supposed to collapse it into `O(n)` when using big O, doesn't that make complexity classes like `log n` and `n!` incorrect too, since they're also just `n` modified by some operation...?
By that logic, `n^2 == n`, because it's the same situation as `n/2`, only with `^` instead of `/`. And `n!` wouldn't be correct either, because a factorial is just a series of multiplications, so `n!` is the same as `n*(n-1)*...`, which, I believe, would collapse into something completely different after throwing away all the `-1`s. Where am I wrong?
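To show what I mean about dividing versus exponentiating, here is a comparison I put together (my own numbers, so possibly missing the point): I print the ratio of each candidate to plain `n` as `n` grows.

```python
import math

# If the ratio to plain n stays constant, the two differ only by a
# constant factor; if the ratio itself grows, the growth rates differ.
for n in [5, 10, 20]:
    ratio_half = (n / 2) / n          # always 0.5
    ratio_square = (n ** 2) / n       # equals n, keeps growing
    ratio_fact = math.factorial(n) // n  # equals (n-1)!, grows even faster
    print(n, ratio_half, ratio_square, ratio_fact)
```

`n/2` keeps a fixed ratio to `n`, but `n^2` and `n!` pull further and further away, and that's exactly the part I can't reconcile with "just throw the modifiers away".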
P.S. My understanding of logarithms and big O notation is very incomplete, so please don't hate me if I'm missing something obvious.