
Did the author forget to account for the cost of I/O calls?

The following function prints the powers of 2 from 1 through n (inclusive). For example, if n is 4, it would print 1, 2, and 4. What is its runtime?

int powersOf2(int n) {
  if (n < 1) {
    return 0;
  } else if (n == 1) {
    System.out.println(1);
    return 1; 
  } else {
    int prev = powersOf2(n / 2); 
    int curr = prev * 2; 
    System.out.println(curr); 
    return curr;
  }
}

The book's answer is that the runtime is O(log n).

However, according to Example 12 (string permutations), the length of the argument passed to System.out.println() does matter:

Executing line 7 takes O(n) time since each character needs to be printed

From the I/O perspective we need to print the powers of 2 from 2^0 up to 2^K, where K = floor(log2(N)). The number of characters needed to print 2^X is roughly 1 + X/log2(10), so the total number of characters printed is roughly (K + 1) + K*(K + 1)/(2*log2(10)), and the runtime is O(log^2 N), not O(log N).
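
To sanity-check this, here is a small sketch of mine (not from the book): it mirrors powersOf2 but counts the characters that System.out.println would emit instead of actually printing them, and compares the total with the estimate above (which ignores the floor in the digit count, so it slightly overestimates).

public class PowersOf2PrintCost {

  // Total characters "printed" so far.
  static long chars = 0;

  // Same recursion as powersOf2, with println replaced by a character counter.
  static int powersOf2Cost(int n) {
    if (n < 1) {
      return 0;
    } else if (n == 1) {
      chars += String.valueOf(1).length(); // prints "1"
      return 1;
    } else {
      int prev = powersOf2Cost(n / 2);
      int curr = prev * 2;
      chars += String.valueOf(curr).length(); // digits of the printed power of 2
      return curr;
    }
  }

  public static void main(String[] args) {
    double log2of10 = Math.log(10) / Math.log(2);
    for (int n : new int[] {10, 1_000, 1_000_000, 1_000_000_000}) {
      chars = 0;
      powersOf2Cost(n);
      double k = Math.floor(Math.log(n) / Math.log(2));         // K = floor(log2(n))
      double estimate = (k + 1) + k * (k + 1) / (2 * log2of10); // formula above
      System.out.printf("n=%d  printed chars=%d  estimate~%.1f%n", n, chars, estimate);
    }
  }
}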

PS.

Example 15 (printing memoized Fibonacci numbers) seems to have the same issue:

void allFib(int n) {
  int[] memo = new int[n + 1]; 
  for (int i = 0; i < n; i++) {
    System.out.println(i + ": " + fib(i, memo));
  }
}

int fib(int n, int[] memo) {
  if (n <= 0) return 0;
  else if (n == 1) return 1;
  else if (memo[n] > 0) return memo[n];
  memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
  return memo[n];
}

The book's analysis: "We're doing a constant amount of work N times, so this is O(n) time."

The number of characters needed to print the first N Fibonacci numbers grows as ~N^2 (the i-th Fibonacci number has Θ(i) digits), so the runtime should be O(N^2).
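
To check this empirically, here is another quick sketch of mine (not from the book): it builds the same "i: fib(i)" lines that allFib(n) would print but only counts their characters, using BigInteger for the Fibonacci values so they do not overflow. Doubling n roughly quadruples the total.

import java.math.BigInteger;

public class FibPrintCost {
  public static void main(String[] args) {
    for (int n : new int[] {100, 200, 400, 800}) {
      long chars = 0;
      BigInteger a = BigInteger.ZERO; // fib(0)
      BigInteger b = BigInteger.ONE;  // fib(1)
      for (int i = 0; i < n; i++) {
        chars += (i + ": " + a).length(); // the line allFib would print for i
        BigInteger next = a.add(b);       // advance the sequence
        a = b;
        b = next;
      }
      System.out.println("n=" + n + "  printed chars=" + chars);
    }
  }
}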

Andrey B. Panfilov
  • I think you want to ask that type of question in Stack Exchange's Software Engineering or Computer Science sites. Hopefully you won't have your question closed for this reason like it has happened to me in the past. Perhaps this answer will help https://stackoverflow.com/questions/13467674/determining-complexity-for-recursive-functions-big-o-notation – hfontanez Dec 01 '21 at 14:02
  • The BigO notation is a measure of the number of iterations of computation required for each N in the problem space. For example, doing a search of a list of size N has a worst case number of computations of O(N) because the item may not be in the list, and the list is of size N. This would be true whether you're doing a single like-for-like comparison of each item, or a complex regex to check each item. It's the relationship between the problem space and the efficiency of the algorithm. Conversely a search in an ordered list where you exclude half the list each time, might be log(N). – Ashley Frieze Dec 01 '21 at 14:14
  • The point is that this is not a measure of CPU cycles. It's a measure of the algorithmic complexity. Whether you're outputting 1 or 10 characters, that's just a single _step_ in the algorithm. – Ashley Frieze Dec 01 '21 at 14:15
  • Println is log n as it is the _number_ of digits printed. – Thorbjørn Ravn Andersen Dec 01 '21 at 14:18
  • @AshleyFrieze I have especially referred to Example 12 where the author states that cost of printing N characters is O(N) (as well as concatenating strings, etc) and takes this cost into account when computing algorithm complexity. – Andrey B. Panfilov Dec 01 '21 at 14:39
  • It's possible that... (a) You've misinterpreted what the author has said. (b) The author is defining their own concept of Big O notation. (c) My definition of Big O notation is incorrect. I'm going with (a). I recommend further reading on Big O. – Ashley Frieze Dec 01 '21 at 15:48
  • Well, it seems you don't have that book at hand... [here is a discussion of Example 12](https://stackoverflow.com/questions/68633201/example-12-all-permutations-of-a-string-from-big-o-notation-why-line-7-and-line), so for that particular example the author takes into account not only calls related to the "problem space" but also "system" calls, and my question is actually whether, from this perspective, there is a mistake in the computations for the two other examples which also involve I/O – Andrey B. Panfilov Dec 01 '21 at 16:49

1 Answer


You are correct. The total amount of characters being printed out is Θ(log^2 n), and so the total amount of work done by the code works out to Θ(log^2 n) rather than O(log n), assuming we count the cost of printing each character. It's somewhat common when doing back-of-the-envelope big-O calculations to leave out the cost of printing out the result, though in Theoryland you have to factor in those costs.

That being said, one could make a reasonable argument that any int value takes up a fixed amount of characters when printed out on a computer (on a 32-bit machine printing a 32-bit integer requires at most 11 characters, on a 64-bit machine printing a 64-bit integer requires at most 21 characters, etc.). You could therefore argue that the cost of printing any integer is O(1) for any fixed machine, even though in reality there's some auxiliary parameter w representing the size of a machine word and the cost of printing a w-bit integer is O(w).
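
For what it's worth, here is a tiny snippet (mine, not part of the original answer) illustrating the fixed-width argument in Java: the decimal form of an int never exceeds 11 characters and that of a long never exceeds 20, so printing a single value is O(1) once the word size is fixed.

public class FixedWidthInts {
  public static void main(String[] args) {
    // Longest possible decimal representations of fixed-width Java integers.
    System.out.println(String.valueOf(Integer.MIN_VALUE).length()); // 11 ("-2147483648")
    System.out.println(String.valueOf(Integer.MAX_VALUE).length()); // 10 ("2147483647")
    System.out.println(String.valueOf(Long.MIN_VALUE).length());    // 20 ("-9223372036854775808")
  }
}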

For an introductory exercise in learning to reason about big-O, I think it's pedagogically fine to gloss over this detail. But if I were teaching a more advanced theory course, we'd definitely need to factor in these costs.

templatetypedef
  • > one could make a reasonable argument that any int value takes up a fixed amount of characters when printed out on a computer. Is there any rule of thumb we may apply to substitute O(F(N)) with O(1)? For example: in Java the length of a String never exceeds 2^31-1, so we may say that the cost of printing a String is O(1); on the other hand, in Python integers are unbounded and the cost of printing an integer is O(log(N)) – Andrey B. Panfilov Dec 01 '21 at 18:58