Let us use the name A for the presented algorithm, and let us further assume that the input variable is n.
Then, strictly speaking, A is not in the runtime complexity class O(log log n). A must be in Ω(n), i.e., in terms of runtime complexity, it is at least linear. Why? There is i*i, a multiplication whose cost depends on i, which in turn depends on n. A naive multiplication approach might require quadratic runtime; more sophisticated approaches reduce the exponent, but not below linear in terms of n.
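To make this concrete, here is a minimal sketch, assuming A has the shape i = 2; while (i < n) i = i*i as presented in the question, that records the bit length of i right before each multiplication. The operands of the later multiplications grow with n, so they cannot all be constant-time:

    def operand_sizes(n):
        """Run A and record the bit length of i before each squaring."""
        i = 2  # assumed start value, consistent with the trace below
        sizes = []
        while i < n:
            sizes.append(i.bit_length())  # size of both operands of i*i
            i = i * i
        return sizes

    print(operand_sizes(10**6))    # [2, 3, 5, 9, 17]
    print(operand_sizes(10**100))  # [2, 3, 5, 9, 17, 33, 65, 129, 257]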
For the sake of completeness, the comparison < is also a linear operation, not a constant-time one.
For the purpose of the question, we could assume that multiplication and comparison are done in constant time. Then we can formulate the question: how often do we have to apply the constant-time operations < and * until A terminates for a given n?
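We can answer that by counting. Here is a small sketch (same assumed shape of A as above) that tallies the loop iterations, i.e., the applications of * (the applications of < differ only by one final failing check), and compares the tally against log log n at the points n = 2^(2^k):

    import math

    def iterations(n):
        """Count the loop iterations of A for a given n > 2."""
        i, count = 2, 0
        while i < n:
            i = i * i
            count += 1
        return count

    for k in range(1, 6):
        n = 2 ** (2 ** k)
        print(n, iterations(n), math.log2(math.log2(n)))  # count equals k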
Simply speaking, the multiplication reduces the effort logarithmically, and its repeated application leads to a further logarithmic reduction. How can we show this? Thanks to the simple structure of A, we can transform A into an equation that we can solve directly.
A raises i to the power of 2 and does this repeatedly, so after k iterations it has calculated i = 2^(2^k). When is 2^(2^k) = n? To solve this for k, we apply the logarithm (base 2) two times: the first application gives 2^k = log n, the second gives k = log log n (ignoring the bases, which only contribute constant factors).
The strict inequality < only changes the result by rounding up to the next integer, which can be ignored due to the O notation.
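Written out as a chain of equivalences (in LaTeX, base-2 logarithms throughout):

    2^{2^k} = n
    \iff 2^k = \log_2 n
    \iff k = \log_2 \log_2 n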
To answer the very last part of the original question, we can also look at examples for each iteration. We can note the value of i at the end of the while-loop body for each iteration:
1: i = 4 = 2^2 = 2^(2^1)
2: i = 16 = 4*4 = (2^2)*(2^2) = 2^(2^2)
3: i = 256 = 16*16 = 4*4*4*4 = (2^2)*(2^2)*(2^2)*(2^2) = 2^(2^3)
4: i = 65536 = 256*256 = 16*16*16*16 = ... = 2^(2^4)
...
k: i = ... = 2^(2^k)
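This trace can also be generated mechanically. A small sketch (same assumed shape of A) that prints i after each pass through the loop body:

    def trace(n):
        """Print the value of i after each iteration of A's loop body."""
        i, k = 2, 0
        while i < n:
            i = i * i
            k += 1
            print(f"{k}: i = {i} = 2^(2^{k})")

    trace(10**6)
    # 1: i = 4 = 2^(2^1)
    # 2: i = 16 = 2^(2^2)
    # 3: i = 256 = 2^(2^3)
    # 4: i = 65536 = 2^(2^4)
    # 5: i = 4294967296 = 2^(2^5)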