-1

I have this algorithm:

int f(int n) {
    int k = 0;
    while (true) {
        if (k == n * n) return k;
        k++;
    }
}

My friend says that it costs O(2^n). I don't understand why.

Amessihel
  • https://stackoverflow.com/questions/44341669/how-is-on-algorithm-also-an-on2-algorithm – user2258152 Sep 29 '20 at 19:12
  • What have you tried already in terms of analysis for this algorithm, to try to gain an understanding of it? Because if you've not done any analysis yet, it's definitely too early to ask for help on Stackoverflow. – Mike 'Pomax' Kamermans Sep 29 '20 at 19:20
  • I've started studying from "Introduction to Algorithms" by Cormen. I understand O, omega, theta... I usually find the cost of recursive algorithms, but this one has a basic loop, I think. – Manuel Andruccioli Sep 29 '20 at 19:26
  • I think it costs O(n^2) – Manuel Andruccioli Sep 29 '20 at 19:43
  • Not to disparage your friend, but: did you remember to ask them why they think that? Because unless they can explain themselves, their claim is just a claim, not actually true =) – Mike 'Pomax' Kamermans Sep 29 '20 at 19:53
  • Based on your title line, I would expect this to try calculating the square root of the argument `n`. Instead, it's a pointlessly complex way of yielding `n * n`. – pjs Sep 29 '20 at 20:00
  • So, given a parameter n, the function can only return n*n, if it ever returns... Doesn't it sound more like the square than the square root? It is effectively O(n^2), assuming no integer overflow, etc. – aka.nice Sep 29 '20 at 20:03
  • Yes, this is a very bad root... the square root (rounded up) would be `if (k*k >= n) return k;` instead of your `if (k == n*n) return k;`. Also the complexity is not really that easy, as we do not know what `int` is (you did not specify the language/platform or the range of `n`)... If it is a basic type then it would be `O(sqrt(n))` for the sqrt and `O(n^2)` for your square. However, if it is a bignum type then it is much more complicated, as the multiplication would no longer be `O(1)`; it would also depend on its implementation... – Spektre Sep 30 '20 at 07:20

3 Answers

0

The input is n; the while loop iterates n*n times, which is n^2, hence the complexity is O(n^2).

This is based on your source code, not on the title.

For the title, this link may help: complexity of finding the square root. From the answer by Emil Jeřábek I quote:

"The square root of an n-digit number can be computed in time O(M(n)) using e.g. Newton's iteration, where M(n) is the time needed to multiply two n-digit integers. The current best bound on M(n) is n log n 2^{O(log* n)}, provided by Fürer's algorithm."

You may also look at the interesting entry for sqrt on Wikipedia.

ibra
  • there are faster methods (even non-approximate ones)... as there is no need for multiplication to find an integer sqrt; for example, this one, [sqrt without multiplication](https://stackoverflow.com/a/34657972/2521214), I developed some time ago, and even that one is far from the fastest... there are even well-known `O(1)` solutions for floating point (IIRC from Quake3) – Spektre Sep 30 '20 at 07:46
  • @Spektre, the sqrt without multiplication leads to the same or worse complexity; in the link you provided, the algorithm uses a binary search. Besides that, check the [wikipedia on sqrt](https://en.wikipedia.org/wiki/Square_root): it says **the complexity of sqrt with n digits of precision is equivalent to that of multiplying two n-digit numbers**. The method used in C, C++, Python, etc. is called [the shifting nth root algorithm](https://en.wikipedia.org/wiki/Shifting_nth_root_algorithm); the stackoverflow link refers to this method, I think. Its complexity is O(k^2 n^2 log(B)). – ibra Sep 30 '20 at 12:51
  • @Spektre, as far as I know, there is no well-known O(1) solution; if there is, please give us the link. The one you refer to (IIRC Quake3), i.e. the game Quake III Arena, is doing a **fast inverse square root**, meaning 1/sqrt(n). **It is an approximation of the inverse square root of the input.** This approximate result can be refined using a root-finding method (a method that finds the zero of a function); the algorithm uses Newton's method. For more detail look at [Fast inverse square root](https://en.wikipedia.org/wiki/Fast_inverse_square_root). – ibra Sep 30 '20 at 13:06
-2

In my opinion the time cost is O(n^2). This function will return the value k = n^2 after n^2 iterations of the while loop.

  • opinions are not allowed when it comes to complexity: either you can _prove_ that it's O(N^2), in which case you should show that proof, or you're just guessing, and guesses are not answers. – Mike 'Pomax' Kamermans Sep 29 '20 at 19:53
-2

I'm Manuel's friend. What you don't consider is that the input n has a length of log(n)... the time complexity would be n^2 if we considered the input length equal to n, but it's not.

So let's consider x = log(n) (the length of the input); now we have that n = 2^x = 2^(log n) = n, and so far all is correct.

Now if we calculate the cost as a function of n we get n^2, but n is equal to 2^x and we need to calculate the cost as a function of x (because time complexity is calculated on the length of the input, not on the value), so:

O(f) = n^2 = (2^x)^2 = 2^(2x), which is exponential in x

(image: calculation with Excel)


"In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows." (https://en.wikipedia.org/wiki/Big_O_notation)

Here's another explanation where the algorithm in question is the primality test: Why naive primality test algorithm is not polynomial.

g1lf0yl3
  • There is no `x` in the algorithm; big-O is calculated/derived relative to the input(s), and the input here is `n`. – pjs Sep 29 '20 at 20:27
  • n is the value of the input, not the length – g1lf0yl3 Sep 29 '20 at 20:29
  • On what basis do you believe that big-O revolves around the length of the input rather than the value? It's determined by how many times you iterate through the loop, which is proportional to the value of `n`, not its length. – pjs Sep 29 '20 at 20:33
  • It is well known that the cost of an algorithm is based on the length of its input... Look at time complexity on wikipedia: https://en.wikipedia.org/wiki/Time_complexity It says: "the time complexity is generally expressed as a function of the size of the input." – g1lf0yl3 Sep 29 '20 at 20:39
  • Size != length. For numeric values size == magnitude. But if you doubt me, step through it by hand. Put in some value for `n` such as 5 and count how many times you go through the loop. Then put in 10 - my prediction is you'll go through the loop four times as often as for 5. – pjs Sep 29 '20 at 20:40
  • but the size of n is the number of bits that we use to represent it, and that is log(n) – g1lf0yl3 Sep 29 '20 at 20:44
  • Guess I've been teaching it wrong for 35+ years then. Heh! Go figure! – pjs Sep 29 '20 at 20:45
  • and what about the explanation of the primality test? – g1lf0yl3 Sep 29 '20 at 20:46
  • @g1lf0yl3 I'm guessing your teacher and Manuel are talking about different pieces of code. – Bart Kiers Sep 29 '20 at 20:47
  • I suspect you misheard or misinterpreted your prof's explanation. I strongly suggest that you perform the experiment I suggested. The crucial concept here is how the problem scales relative to the input. If doubling the input value `n` takes 4 times as many iterations, then the complexity is quadratic. – pjs Sep 29 '20 at 20:49
  • as far as I remember from the lessons on algorithms and data structures, he told us about this case (not this particular case, but a similar one) and the fact that the cost was exponential (as it is in the case of the primality test) – g1lf0yl3 Sep 29 '20 at 20:51
  • And what is "this case"? Does it resemble the pseudo code posted by Manuel? – Bart Kiers Sep 29 '20 at 20:51
  • He told us this: If we have an F(n) which takes O(n^2) its real cost will be exponential since the input's size is log(n) – g1lf0yl3 Sep 29 '20 at 20:54
  • If he said that as a universal rule, he's wrong. If he said it in a particular context, as @BartKiers has mentioned, you may have overlooked or not grasped the significance of that context. Again, I urge you to do the calculations by hand for this problem and see for yourself how the problem scales as a function of the input value `n`. – pjs Sep 29 '20 at 21:00
  • and what about this? https://cs.stackexchange.com/questions/92624/time-complexity-of-basic-primality-test-algorithm – g1lf0yl3 Sep 29 '20 at 21:07
  • @pjs as you asked, I did the calculations and I found that the time complexity is exactly 2^(2x), where x is the size of the input n. I linked the image with the calculation in my answer. – g1lf0yl3 Sep 29 '20 at 21:32
  • @pjs maybe we don't understand each other; what I'm trying to say is that yes, the function is quadratic as a function of n (if we consider its value), but if we consider the size of the input (as the definition of time complexity says) then the cost becomes exponential – g1lf0yl3 Sep 29 '20 at 21:50
  • @g1lf0yl3 The execution time of an inc instruction does not depend on operand size, so adding 1 to a number that takes 2 bits takes the same time as adding 1 to a number that is up to 32 bits wide. So n^2 steps in the loop (actually it can be n*(n-1)) has O(n^2) time complexity. Your teacher's rule is correct for computers with a 1-bit word. – Yuri Ginsburg Sep 30 '20 at 02:31
  • I'm not talking about the time complexity of the inc operation; I obviously considered it constant time. – g1lf0yl3 Sep 30 '20 at 06:10
  • this clearly does not correspond to the OP's code, and if it does then it's wrong, so -1 – Spektre Sep 30 '20 at 07:51
  • One last try, then I must get back to real work. Your premise is that the algorithm's complexity is driven by the number of bits in the representation of `n`. For positive integers `k`, all values in the range `[2^k, 2^(k+1) - 1]` have the same number of bits. Thus, if your premise is true, all inputs in the range [32768, 65535] should have the same run time when evaluated using Manuel's algorithm. This is not the case (65535 takes just shy of 4 times as long to evaluate as 32768). Q.E.D. your premise is incorrect. – pjs Sep 30 '20 at 15:57
  • I understand what you are trying to explain, but as far as I remember, time complexity is an approximation, not the exact function for how much time an algorithm will take. So yes, my exponential cost doesn't always give the exact cost, but as you go to infinity you can see that the algorithm is exponential as a function of the size of the input. But this is obviously true, because it is quadratic as a function of n and there is a constant correlation between n and its size, where n is 2^(size). – g1lf0yl3 Sep 30 '20 at 22:58
  • @g1lf0yl3 In other words, you can throw constants out of complexities, but you threw out a function of `n` instead, which is incorrect and leads to a very wrong result... – Spektre Oct 07 '20 at 06:37