Basically, you need to count the number of operations the algorithm performs as a function of the input size. The Big O is then just the highest-order term of that expression, with constant factors ignored.
One does not care about the kind of operation (comparison, assignment, ...) as long as each operation takes constant time.
For complex algorithms, that analysis can be difficult.
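
As a trivial illustration of that counting (a Python sketch, not part of the original answer): summing a list performs one addition per element, so `t(n) = n + c` and the highest-order term gives `O(n)`.

```python
def total(values):
    """Sum a list with one addition per element.

    The loop body runs exactly len(values) times, so
    t(n) = n additions + constant setup, i.e. O(n).
    """
    s = 0
    for v in values:
        s += v  # executed n times for n elements
    return s
```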
For binary search: with each search step, the number of values to be searched is halved. So doubling the input requires just one more search step (operation): `t(2n) = 1 + t(n)`. This results in `t(n) = c · ld n = O(log n)` (where `ld` is the binary logarithm), at least for powers of two. For other `n`, the expression is more complex, but the highest-order term is still `c · ld n`.
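
A minimal sketch of such a binary search (Python; the `steps` counter is added here purely to make the recurrence visible, it is not part of the algorithm):

```python
def binary_search(values, target):
    """Search a sorted list; return (index, steps), or (None, steps).

    Doubling len(values) adds roughly one loop iteration,
    matching the recurrence t(2n) = 1 + t(n).
    """
    lo, hi = 0, len(values) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if values[mid] == target:
            return mid, steps
        if values[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, steps
```

Searching a sorted list of 2048 values takes at most one step more than searching a list of 1024 values, which is exactly the halving argument above.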
For Fibonacci: the naive, recursive implementation requires you to calculate `fib(n-1)` and `fib(n-2)` in order to calculate `fib(n)`. Hence you calculate `fib(n-2)` twice, `fib(n-3)` three times, `fib(n-4)` five times, and so on (following the Fibonacci series itself). So the number of calculations to do is `1 + 1 + 2 + 3 + 5 + ... + fib(n-1) = fib(n) - 1`.
Since we are interested in the asymptotic behavior (for big `n`), we can apply the asymptotic approximation formula for Fibonacci numbers (the dominant term of Binet's formula): `fib(n) ≈ φ^n / √5`, where `φ = (1 + √5) / 2 ≈ 1.618` is the golden ratio.

This means naive, recursive Fibonacci is `O(φ^n)`, i.e. exponential complexity.
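
To see the call counts concretely, here is a sketch (Python; `fib_calls` is a hypothetical helper that returns the value together with the number of invocations):

```python
def fib_calls(n):
    """Naive recursive Fibonacci, also counting invocations.

    Returns (fib(n), total number of fib_calls invocations).
    The call count itself grows like the Fibonacci series,
    i.e. roughly by a factor of φ ≈ 1.618 per increment of n.
    """
    if n < 2:
        return n, 1
    a, calls_a = fib_calls(n - 1)
    b, calls_b = fib_calls(n - 2)
    return a + b, calls_a + calls_b + 1
```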
A better algorithm starts from the beginning of the Fibonacci series and calculates each number once, as in the sketch below. That's obviously O(n), as it takes `n` (or `n - 2`) equal steps.
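
A minimal sketch of that linear approach (Python, assuming the usual `fib(0) = 0, fib(1) = 1` convention):

```python
def fib_iter(n):
    """Iterative Fibonacci: each number in the series is computed once.

    The loop runs n - 1 times, so the running time is O(n).
    """
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
    return curr
```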