
Both of these algorithms give the same output, but the first one takes nearly double the time (>0.67 s) compared to the second (0.36 s). How is this possible? Can you tell me the time complexity of both algorithms? If they're the same, why are the times different?

1st algorithm:

    for (int i = 0; i < n; i++) {
        cin >> p[i];
        if (i > 0) {
            if (p[i-1] > p[i]) {
                cout << p[i] << " ";
            } else {
                cout << "-1" << " ";
            }
        }
    }

2nd algorithm:

    for (int i = 0; i < n; i++) {
        cin >> p[i];
    }
    for (int i = 0; i < n-1; i++) {
        if (p[i] > p[i+1]) {
            cout << p[i] << " ";
        } else {
            cout << "-1" << " ";
        }
    }
ggorlen
    The first one can take *infinite* amount of time, since you have input in the loop, which you don't have in the second. – Some programmer dude Jan 19 '19 at 04:24
    Welcome to the StackOverflow. Could you please improve your question by 1. Fixing the code. 2. Providing the test data, the way you measure time, and whatever else is required to make it a [mcve] – SergGr Jan 19 '19 at 04:25
    These two algorithms do different things, and they take different amounts of time. What else is possible? – L. F. Jan 19 '19 at 04:26
    First level of optimizing your code: Make a print statement in each iteration of the versions of the loops and then count them. Ponder! – Ted Lyngmo Jan 19 '19 at 04:35
  • You can always dump the assembly and compare the instructions required between the two. – David C. Rankin Jan 19 '19 at 04:42
  • The code for 1st and 2nd algorithm have a different output, but that not the main point. Please provide a minimal example. How did you measure the time? I had the same result (running time) for both algorithms. There was no difference. – Julo Jan 19 '19 at 05:00

1 Answer


Time complexity in a modern processor can be an almost-useless performance statistic.

In this case we have one algorithm that loops from 0 to n-1 once (O(N)) and a second that loops from 0 to n-1 twice; the constant factor of 2 drops out, so it's still O(N). The first algorithm has an extra if statement that will be false exactly once, and a decent compiler will optimize it away. We wind up with the same amount of input, the same amount of output, the same number of array accesses (more or less), and the same number of if (a > b) comparisons.

What the second has that the first doesn't is determinism. All of the input is read in the first loop, so that one loop determines everything for the second. The CPU can see exactly what is going to happen ahead of time because it already has all of the numbers: it knows exactly which way every branch of the if will go, so it can predict with 100% accuracy, load up the caches, and fill the pipelines so that everything is ready ahead of time without missing a beat.

Algorithm 1 can't do that because the next input is not known until the next iteration of the loop. Unless the input pattern is predictable, the CPU has to guess which way if (p[i-1] > p[i]) goes, and it will guess wrong a lot of the time.

Additional reading: Why is it faster to process a sorted array than an unsorted array?

user4581301