I ran across a problem where I needed to calculate the values of very large factorials. I solved this problem in C++ in two different ways, and I want to know whether my complexity analysis is accurate.
In either method I am representing very large numbers as vectors where `v[0]` represents the least significant digit, and the value at the last index represents the most significant digit. Version 1's code can be found in this gist.
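The gist isn't reproduced here, but as a rough sketch of this representation (the `toDigitVector()` helper below is illustrative, not part of the gist), the number 120 would be stored as `{0, 2, 1}`:

```cpp
#include <iostream>
#include <vector>

// Illustrative helper (not part of the gist): build the digit-vector
// representation, least significant digit first.
std::vector<int> toDigitVector(unsigned long long value) {
    std::vector<int> digits;
    if (value == 0) digits.push_back(0);
    while (value > 0) {
        digits.push_back(static_cast<int>(value % 10)); // LSD first
        value /= 10;
    }
    return digits;
}

int main() {
    std::vector<int> v = toDigitVector(120); // stored as {0, 2, 1}
    // Print most significant digit first to recover "120".
    for (auto it = v.rbegin(); it != v.rend(); ++it) std::cout << *it;
    std::cout << '\n';
}
```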
Given the above code, it seems `multiplyVectorByInteger()` is $O(\log(nk))$, where $n$ is the given integer and $k$ is the number represented by the vector. My logic is that we do a number of steps proportional to the length of the resulting number $nk$ in order to produce a vector representing $nk$. Since a positive integer $m$ has $\lfloor\log_{10} m\rfloor + 1$ decimal digits, the length of $nk$ is $O(\log(nk))$. Some of the steps are carried out in the for loop, others in the following while loop.
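The gist isn't inlined here, so the following is only a sketch of the shape that description implies (a digit loop followed by a carry-flushing loop), not the gist's exact code:

```cpp
#include <vector>

// Sketch: multiply the number in `digits` (least significant digit
// first) by the integer n, in place.
void multiplyVectorByInteger(std::vector<int>& digits, int n) {
    int carry = 0;
    // The for loop touches every existing digit once.
    for (std::size_t i = 0; i < digits.size(); ++i) {
        int product = digits[i] * n + carry;
        digits[i] = product % 10; // keep one digit
        carry = product / 10;     // defer the rest to the carry
    }
    // The while loop appends the leftover carry, growing the vector
    // to the full length of n*k.
    while (carry > 0) {
        digits.push_back(carry % 10);
        carry /= 10;
    }
}
```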
In this program to find large factorials, whenever we call `multiplyVectorByInteger()` we pass in an integer $n$ and the vector representation of $(n-1)!$. This means if we want to find $6!$, we pass in the integer $6$ and the vector representation of $5!$, and the function returns the vector representation of $6!$. Using the previous information, I believe I can say the complexity of a call is $O(\log(i!))$, where $i$ is the passed-in integer. In order to find large factorials, we must call this method $O(n)$ times, where $n$ is the factorial we are trying to find. Our accumulated logic will look like this (a driver sketch follows the list):
1! = 1!
1!*2 = 2!
2!*3 = 3!
3!*4 = 4!
...
(n-1)!*n = n!
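A driver realizing this accumulation might look like the sketch below (again illustrative, with a hypothetical `factorialDigits()` helper, rather than the gist verbatim):

```cpp
#include <iostream>
#include <vector>

// multiplyVectorByInteger() as sketched above.
void multiplyVectorByInteger(std::vector<int>& digits, int n) {
    int carry = 0;
    for (std::size_t i = 0; i < digits.size(); ++i) {
        int product = digits[i] * n + carry;
        digits[i] = product % 10;
        carry = product / 10;
    }
    while (carry > 0) { digits.push_back(carry % 10); carry /= 10; }
}

// Accumulate n! by folding in 2, 3, ..., n: (i-1)! * i = i!.
std::vector<int> factorialDigits(int n) {
    std::vector<int> digits{1}; // represents 1!
    for (int i = 2; i <= n; ++i) multiplyVectorByInteger(digits, i);
    return digits;
}

int main() {
    std::vector<int> v = factorialDigits(20);
    for (auto it = v.rbegin(); it != v.rend(); ++it) std::cout << *it;
    std::cout << '\n'; // prints 2432902008176640000
}
```

Each iteration multiplies the running digit vector by the next integer, so the whole run performs exactly the $O(n)$ calls counted above.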
Since at each level we're calculating $i!$, we're consequently performing $O(\log(i!))$ steps at each level. The summation to show this is as follows, using the fact that $\log(i!) = O(i \log i)$:

$$\sum_{i=1}^{n} O(\log(i!)) = \sum_{i=1}^{n} O(i \log i) = O\!\left(n^2 \log(n!)\right)$$
My logic for jumping from the second summation to the Big-O notation is as follows. Breaking this out, we get the following:
$$1\log(1) + 2\log(2) + 3\log(3) + \dots + n\log(n)$$

It is obvious we get $O(n^2)$ terms of $\log(1) + \log(2) + \dots + \log(n)$. Log rules remind us that $\log(a) + \log(b) = \log(ab)$, which means the log terms in this case collapse to $\log(n!)$. Thus we have $O(n^2)\log(n!)$.

This would make the overall time complexity of this program $O(n^2 \log(n!))$. Is this analysis correct?
## Naive version time complexity
To practice complexity analysis, I want to take a look at what seems like a less efficient solution. Suppose we change our `multiplyVectorByInteger()` function so that instead of multiplying the vector representation of $k = (n-1)!$ by an integer $n$ in $O(\log(n!))$ time to produce $n!$, the new `multiplyVectorByIntegerNaive()` function sums $n$ copies of the vector together by repeated addition.
`multiplyVectorByIntegerNaive()` exists in this gist. It utilizes a function `addVectors()` whose complexity is $O(m)$, where $m$ is the size of the larger of the two vectors.
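Neither function is inlined here, but a sketch consistent with that description (the names match the ones mentioned above; the bodies are my illustration) could look like this:

```cpp
#include <vector>

// Add two digit vectors (least significant digit first).
// Runs in O(m), where m is the size of the larger vector.
std::vector<int> addVectors(const std::vector<int>& a, const std::vector<int>& b) {
    std::vector<int> sum;
    int carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size(); ++i) {
        int digit = carry;
        if (i < a.size()) digit += a[i];
        if (i < b.size()) digit += b[i];
        sum.push_back(digit % 10);
        carry = digit / 10;
    }
    if (carry > 0) sum.push_back(carry);
    return sum;
}

// Multiply by repeated addition: n * k computed as k + k + ... + k.
// For n >= 1 this performs n - 1 calls to addVectors().
std::vector<int> multiplyVectorByIntegerNaive(const std::vector<int>& digits, int n) {
    std::vector<int> result = digits; // 1 * k
    for (int i = 1; i < n; ++i) {
        result = addVectors(result, digits); // (i + 1) * k
    }
    return result;
}
```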
It's clear we're still calling this new multiplication function $n$ times, but we need to see if the complexity has changed. For example, given the integer $6$ and the vector representation of $5!$, we add $5! + 5! + 5! + 5! + 5! + 5!$ to get $6 \cdot 5! = 6!$. If the given integer to our multiplication function is $i$, it is clear we do $i-1$ additions. We can enumerate the steps for the previous example call to our naive multiplication function (a quick driver reproducing this trace follows the list):
5! + 5!
2*5! + 5!
3*5! + 5!
4*5! + 5!
5*5! + 5!
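As a sanity check, a small driver reusing the illustrative sketches above reproduces this trace:

```cpp
#include <iostream>
#include <vector>

// Assumes the toDigitVector(), addVectors(), and
// multiplyVectorByIntegerNaive() sketches from above are in scope.
int main() {
    std::vector<int> fiveFactorial = toDigitVector(120); // 5! = 120
    std::vector<int> sixFactorial =
        multiplyVectorByIntegerNaive(fiveFactorial, 6);  // five additions
    for (auto it = sixFactorial.rbegin(); it != sixFactorial.rend(); ++it)
        std::cout << *it;
    std::cout << '\n'; // prints 720
}
```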
Writing out the full summation should now give:

$$\sum_{i=1}^{n} (i-1)\,O(\log(i!)) = \sum_{i=1}^{n} O\!\left(i^2 \log i\right) = O\!\left(n^2 \log(n!)\right)$$
It appears the asymptotic complexity of both methods is the same, provided my calculations are accurate. Is this true?