Before this gets flagged as a duplicate: I have searched StackOverflow for an answer to this and have not found anything that explains it, so please read the whole question first.
Suppose you need to write a function that takes an integer n and returns the sum of the positive integers from 1 to n (I will use C).
int sum_of_integers(int n) {
    int i, counter = 0;
    for (i = 1; i <= n; i++)
        counter += i;
    return counter;
}
This algorithm clearly runs in O(n) time, since the number of additions it performs is proportional to n.
However, consider this implementation, which uses the identity 1 + 2 + ... + n = n(n+1)/2.
int sum_of_integers(int n) {
    /* Divide the even factor first, so that integer division
       is exact and the intermediate product is smaller,
       reducing the risk of int overflow. */
    if (n % 2 == 0)
        return (n / 2) * (n + 1);
    return ((n + 1) / 2) * n;
}
My Question: Since multiplication of arbitrarily large numbers is not a constant-time operation (its cost grows with the number of digits), is the first implementation preferred? Is the second implementation considered to be O(1) or not?
To me, the second implementation should be O(1), because it performs the same fixed number of operations no matter how large n is. On the other hand, depending on how the multiplication operator is implemented, it may actually execute more instructions for larger operands.