The internet has a lot of resources explaining the Big-O of the log-factorial function (e.g. 1, 2), which is O(n log(n)).

What I do not understand is why `log()` is O(log(n)) and not at least O(n). From the recursive code below, `log_factorial` is called essentially n times (which translates to `log()` being called n times), so if `log()` were O(n), `log_factorial` ought to be O(n^2).
```r
log_factorial <- function(n) {
  # Return the log of factorial(n) for any integer n > 0
  if (n <= 1)
    return(0)
  return(log(n) + log_factorial(n - 1))
}
```
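To double-check the claim that `log()` is called n times, here is a small instrumented variant (the names `counted_log` and `log_factorial_counted` are mine, just for illustration):

```r
# Instrumented variant: count how many times log() is actually evaluated.
log_calls <- 0
counted_log <- function(x) {
  log_calls <<- log_calls + 1  # bump the global counter
  log(x)
}
log_factorial_counted <- function(n) {
  if (n <= 1)
    return(0)
  return(counted_log(n) + log_factorial_counted(n - 1))
}
log_factorial_counted(100)
log_calls  # 99 -- one log() evaluation per recursion level, i.e. O(n) calls
```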
Searching the internet did not help me much (on the contrary, it got me more confused). Wikipedia says the Big-O of `log` should depend on the degree of precision:

> ...computing the natural logarithm (using the arithmetic-geometric mean) is O(M(n) ln n). Here n is the number of digits of precision at which the natural logarithm is to be evaluated and M(n) is the computational complexity of multiplying two n-digit numbers.
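If I read the quote correctly, the cost depends on the number of digits of precision requested, not on the magnitude of the argument. A minimal sketch of that effect, assuming the Rmpfr package for arbitrary-precision arithmetic (the precisions and repeat count are my own picks):

```r
library(Rmpfr)  # arbitrary-precision arithmetic via MPFR

for (bits in c(1e3, 1e4, 1e5)) {
  x <- mpfr(2, precBits = bits)  # the number 2 held at 'bits' bits of precision
  t <- system.time(for (i in 1:100) log(x))["elapsed"]
  cat(bits, "bits:", t, "seconds\n")
}
# runtime grows with the precision, not with the size of the argument
```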
To get an empirical answer, I plotted runtimes vs. n (below), and indeed the log-factorial is not quadratic.
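For reference, a minimal sketch of that timing experiment (the range of n and the repeat count are my own choices, not the original benchmark):

```r
# Time log_factorial over a range of n and eyeball the growth.
ns <- seq(100, 1000, by = 100)  # modest n, well under R's recursion limit
elapsed <- sapply(ns, function(n) {
  system.time(for (i in 1:500) log_factorial(n))["elapsed"]
})
plot(ns, elapsed, xlab = "n", ylab = "elapsed seconds (500 calls each)")
# A quadratic log_factorial would curve sharply upward here; roughly linear
# growth is consistent with the non-quadratic plot described above.
```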
I am obviously missing something here and hope someone can explain it to me. Pardon me if my question is 'noob-ish'.