// -- Algorithm A
    int a = 1, b = 2;
    for (int n = 0; n < 100; n++)
    {
        int c = a + b + n;
        int d = a - b - n;
    }

// -- Algorithm B
    int a = 1, b = 2;
    for (int n = 0; n < 100; n++)
    {
        int c = a + b + n;
    }

    for (int n = 0; n < 100; n++)
    {
        int d = a - b - n;
    }

Should I try to reuse existing loops for the necessary operations, or is the result the same in the end?

Pedro Batista
  • The difference in execution time must be minuscule (if any) and depends on the compiler and optimizer – Dmitry Bychenko Jan 27 '17 at 11:50
  • In theory or in practice? A clever compiler can spot that neither algorithm does anything observable, and will compile them as a no-op. – biziclop Jan 27 '17 at 11:56

4 Answers


In O(n) notation they will be the same. According to this:

you first have a sum:

O(n) + O(n) = O(2n)

and then multiplication by a constant:

O(2n) = O(n)

so in the end it will be O(n).

user987339
  • But what I can't understand is: Won't the n++ instruction be executed twice the times in algorithm B? How can the complexity be the same? – Pedro Batista Jan 27 '17 at 12:10
  • O(n) is an approximation. It describes time complexity in terms of functions. In this case both implementations are linear functions, and their combination is still a linear function, so in the end they are both O(n). O(n) is not used to calculate how long a function will take, but to estimate its order of growth. – user987339 Jan 27 '17 at 12:32

Complexity-wise, both algorithms are O(n). Even if you consider multiplicative constants, you could say that one is n * 2 and the other one n + n, which is exactly the same.

In reality, though, it depends. One could argue that, since the second one performs twice as many branches, the performance will probably be worse (see this famous question), but ultimately it depends on the compiler, the particular input, the OS, etc.

jdehesa
  • @PedroBatista Each of the two operations (`a + b + n` and `a - b - n`) will be executed a hundred times, but the operations related to the loop will be, in principle, run twice as many times in the second version than in the first. This basically includes the increment and comparison of the counter (which could be deemed insignificant) and the branch instruction to jump back to the beginning of the loop (which might not be as insignificant). – jdehesa Jan 27 '17 at 12:12
  • The complexity, though, is the same because it does not account for multiplicative constants (which is why it is misleading many times). So even if one took 10 times longer than the other, as long as it is always a factor of 10 (and not *m*, or the size of *m*, where *m* is some input), the complexity is the same. – jdehesa Jan 27 '17 at 12:15
  • Ok, I get it. So the n++ and the comparison will occur twice the times (unless a smart compiler realizes this and optimizes my dumb code) – Pedro Batista Jan 27 '17 at 12:25
1

In your current implementation

  int a = 1, b = 2;

  for (int n = 0; n < 100; n++)
  {
      int c = a + b + n;
      int d = a - b - n;
  }

you're doing nothing: both c and d are local variables, which exist within the for loop's scope only; if the optimizer is smart enough to find out that there's no possibility of integer overflow (both 1 + 2 + 100 and 1 - 2 - 100 are within [int.MinValue..int.MaxValue]), it may well eliminate the entire loop(s), with a warning to the developer.

Real world example is

 for (int n = 0; n < N; n++)
 { 
   f(n);
   g(n);
 }

Versus

 for (int n = 0; n < N; n++)
   f(n); 

 for (int n = 0; n < N; n++)
   g(n); 

where both f(n) and g(n) don't have side effects and N is large enough. So far so good; in the 1st case the execution time is

 T = f(0) + g(0) + 
     f(1) + g(1) + 
     ...  
     f(N - 2) + g(N - 2) +
     f(N - 1) + g(N - 1) 

In the 2nd case

T = f(0) + f(1) + ... f(N - 2) + f(N - 1) +
    g(0) + g(1) + ... g(N - 2) + g(N - 1)  

As you can see, the execution times are the same (not only in O(...) terms). In real life, there can be a minuscule difference between the two implementations: loop initialization and implementation details, CPU register utilization, etc.

Dmitry Bychenko
  • It all makes sense, I just do not understand why the n++ instruction (which happens 2*N times in the second algorithm) and the n < N comparison are not used to calculate time complexity. – Pedro Batista Jan 27 '17 at 12:31
  • @Pedro Batista: usually, it's `f` and `g` that determine the execution time; `n++` is one `INC` assembly instruction (provided that `n` is kept in a CPU register), and `n < N` is something like `CMP` + `JNZ`; all these instructions take about 1/3–3 ticks (nanoseconds) – Dmitry Bychenko Jan 27 '17 at 12:36

The first algorithm will definitely be faster, but since the complexity only grows linearly, the second one is not bad either. As long as you don't go quadratic, both are fine.

But if you end up writing n such loops, you have n^2 complexity, which is bad.

Mitesh Pant