I want to find the complexity of the code below. I used this code to find the second highest element in the array using sorting.
for(i=0;i<2;i++)
{
for(j=0;j<n;j++)
{
//some code
}
}
Is the complexity O(2n) or O(n^2)?
This is a very broad topic, so I can only sketch it here; for a thorough treatment, refer to a good book such as Cormen's Introduction to Algorithms.
Complexity:
The basic structure of a for loop is
for(initialization; condition; update)
The update step changes the loop variable, so the body executes once per iteration until the condition fails. In your code the outer loop runs exactly 2 times and the inner loop runs n times for each of those, so the body executes 2*n times in total: that is O(n), not O(n^2). The sum n*(n+1)/2, which is O(n^2), arises only when the inner bound depends on the outer variable, e.g. for(j = 0; j < i; j++) nested inside for(i = 0; i < n; i++).
Estimation of Complexity:
Sometimes it is not easy to get a formula for the complexity of an algorithm. In such cases it may be possible to estimate it by experiment. Counting variables can be added to the program, incremented when some critical operation is carried out, and the final totals printed. The running time can also be measured, either with a stopwatch or, better, by calling a routine to read the computer system's clock. The complexity can then be inferred by examining how such measures vary with the problem size.
The accuracy of timing a program or an operation can be improved by timing a number of executions, perhaps in a loop, and dividing the total time taken by that number. A time-shared computer is used by many people simultaneously, so the elapsed time taken by a program depends on the system load. Therefore any timing done on a shared machine must be based on the central processor time devoted to the particular program under study, not on the elapsed time.
Examining differences between adjacent terms in a series can indicate the form of the underlying function that defines the series. A linear function T(n) = a*n + b gives rise to a constant difference between T(n) and T(n-1):

D1(n) = T(n) - T(n-1) = a*n + b - a*(n-1) - b = a

A quadratic function T(n) = a*n^2 + b*n + c gives rise to linear first-order differences:

D1(n) = T(n) - T(n-1) = a*n^2 + b*n + c - a*(n-1)^2 - b*(n-1) - c = 2a*n - a + b

which in turn gives rise to constant second-order differences D2(n) = D1(n) - D1(n-1) = 2a. In general, a polynomial of degree d is revealed by constant dth-order differences.
The best way to know the solution is to draw a table:
Iteration | i | j
----------+--------+-------
0 | 0 | 0
0 | 0 | 1
0 | 0 | 2
0 | ... | ...
0 | ... | ...
0 | ... | n - 1
1 | 1 | 0
1 | 1 | 1
1 | ... | ...
1 | ... | ...
1 | ... | n - 1
How many times is the body executed? That is the answer: the table has n rows for i = 0 and n rows for i = 1, so 2*n in total.
If you want an intuition, pick some n and run the example. Then choose another n and see what you get; comparing the results will lead you to the answer.
If "some code" runs in O(1), then the complexity of this code is O(2n): the inner loop is O(n), and we execute it 2 times, giving O(2n) = O(n).
Big Oh notation gives order-of-magnitude estimates; constant factors do not change the order of growth of an algorithm, so O(2n) = O(n).
It is similar to saying 1000 >> 10 and 1000 >> 5: 1000 is much bigger than both, and it is just as much "bigger" than 5 as it is than 10, even though 10 is twice the value of 5.