
I have written some simple code:

int temp = 10;

long startTime = System.currentTimeMillis();

for (int i = 0; i < temp; i++) {
    for (int j = 0; j < temp; j++) {
        for (int k = 0; k < temp; k++) {
            System.out.println(i + " " + j + " " + k);
        }
    }
}

long stopTime = System.currentTimeMillis();
System.out.println("Total Time: " + (stopTime - startTime));

The total time for this program is around 50ms.

However, when I changed temp to 100, I first thought the total time would be around 500ms (because we increased temp by a factor of 10, and 50 × 10 = 500). On the other hand, I thought that because this is a cubic function (the time complexity is O(n³)), the result shouldn't be exactly 500ms; it should be more, because as temp increases, the total time should grow according to the time complexity. When I ran the program, the result was 3234ms (always around 3000ms).

So, could someone explain how execution time changes according to time complexity? For example, if the execution time is 50ms for 10 items and 3000ms for 100 items, could someone explain more precisely why this happens?

  • Give a read to [What is microbenchmarking?](https://stackoverflow.com/questions/2842695/what-is-microbenchmarking) – Federico klez Culloca Sep 08 '21 at 08:32
  • Time complexity is the theoretical bound on the growth of execution time as the variable (e.g. `n`) tends to infinity. Execution time is the time it takes for a particular value of `n`, which is typically much smaller than infinity, and so might not bear much relation to the time "predicted" by the time complexity. – Andy Turner Sep 08 '21 at 08:40
  • 10^3 = 1000, so you could expect an estimated time of about 50,000 ms, but I suspect the 50 ms measurement is not exact (too long for such a small amount of work, perhaps due to console output initialization or something else). – MBo Sep 08 '21 at 08:41
  • If you run your code with some warm-up and strip out the `System.out.println` (https://ideone.com/z6IUkk), you can see that the difference between 10 and 100 is indeed about 1000x (9.6e-6s vs 1.1e-2s). I wouldn't claim this is a reliable microbenchmark, though, but at least you can see roughly the expected effect. – Andy Turner Sep 08 '21 at 08:53

2 Answers


You must be aware that it is impossible to ever truly observe asymptotic complexity in practice.

For small N (say below a thousand), the low-order terms are not negligible and measurements do not make sense, even in theory. For example, if the true cost were T(N) = N³ + 100N², the quadratic term would still dominate at N = 10 (1,000 vs 10,000), while it would be negligible at N = 10,000 (10¹² vs 10¹⁰).

For large N (between a thousand and a billion), factors such as cache and virtual memory effects play a significant role and invalidate the "constant access time" hypothesis.
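
To illustrate the cache point, here is a minimal sketch (the array size is an arbitrary choice for illustration) that compares row-major and column-major traversal of the same array. Both loops do the same number of additions and have the same big-O, yet the column-major version is usually noticeably slower, because memory access is not constant-cost in practice:

public class CacheDemo {
    public static void main(String[] args) {
        int n = 4096;                       // arbitrary size, large enough to exceed the CPU caches
        int[][] a = new int[n][n];
        long sum = 0;

        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++)         // row-major: walks memory sequentially
            for (int j = 0; j < n; j++)
                sum += a[i][j];
        long rowMajor = System.nanoTime() - t0;

        t0 = System.nanoTime();
        for (int j = 0; j < n; j++)         // column-major: strided, cache-unfriendly access
            for (int i = 0; i < n; i++)
                sum += a[i][j];
        long colMajor = System.nanoTime() - t0;

        // Print sum so the JIT cannot eliminate the loops as dead code.
        System.out.println("row-major: " + rowMajor + " ns, column-major: " + colMajor + " ns (sum=" + sum + ")");
    }
}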

For extremely large N (above a billion), the machine capacity is exhausted.
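
For what it's worth, here is a minimal sketch (not a rigorous micro-benchmark; a harness such as JMH would be the proper tool) that times the bare triple loop after a warm-up pass and without any printing:

public class LoopTiming {
    // Volatile sink so the JIT cannot remove the loop as dead code.
    static volatile long sink;

    static long time(int n) {
        long sum = 0;
        long start = System.nanoTime();
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    sum += i + j + k;       // cheap O(1) body instead of println
        sink = sum;
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        time(100);                          // warm-up so the JIT compiles the loop before measuring
        for (int n : new int[] {10, 100, 1000}) {
            System.out.println("n=" + n + ": " + time(n) + " ns");
        }
    }
}

On a typical JVM, the times for n = 100 and n = 1000 should differ by roughly the expected factor of 1000, while the n = 10 measurement is still dominated by fixed overhead.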


In general, time complexity says nothing about execution times for particular inputs, nor about how those times relate to each other.

For the example you have given, a few observations:

  • Converting numbers to strings and concatenating strings, as in `i + " " + j + " " + k`, takes longer the more individual digits are produced. In theory, one such evaluation has a time complexity of O(log(ijk)).

  • `System.out.println` is a complex beast, and depends on several external factors. You should really exclude any printing from code that you want to benchmark.

  • If you remove the printing, string conversion and concatenation, then the time complexity is indeed O(n³), but the execution time will still include lower-order terms: for instance, the declaration of `j` happens O(n) times and the declaration of `k` happens O(n²) times. In light of the overall time complexity that is not relevant, but in terms of execution time it makes a difference (see the sketch below).
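
To make that last point concrete, here is a small sketch that simply counts how often each loop level runs; the counter names are made up for illustration:

public class LoopCounts {
    public static void main(String[] args) {
        int n = 100;
        long jInits = 0, kInits = 0, bodyRuns = 0;
        for (int i = 0; i < n; i++) {
            jInits++;                       // the j-loop is set up once per i: O(n) times
            for (int j = 0; j < n; j++) {
                kInits++;                   // the k-loop is set up once per (i, j): O(n²) times
                for (int k = 0; k < n; k++) {
                    bodyRuns++;             // innermost body: O(n³) times
                }
            }
        }
        System.out.println(jInits);         // 100
        System.out.println(kInits);         // 10000
        System.out.println(bodyRuns);       // 1000000
    }
}

Only the O(n³) term determines the asymptotic complexity, but at n = 100 the lower-order work still adds about 1% extra iterations on top of the million body runs.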

trincot