2

So I have a problem that is pretty simple. I have a list of the numbers 1-N (incrementing by 1), and one number in the list is missing. I'm trying to determine which of my two solutions is faster in terms of Big-Oh.

First algorithm: Sort the list, then iterate over it one number at a time until you find a number that is more than 1 larger than the previous number. At that point, the missing number has been found.
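
A minimal sketch of this first approach (Java; the class and method names are mine, assuming the values are 1..N with exactly one missing and a non-empty list):

    import java.util.Arrays;

    class MissingBySort {
        // Sort a copy of the list, then scan for the first gap between
        // consecutive values; the gap is where the missing number sits.
        static int findMissing(int[] numbers) {
            int[] sorted = numbers.clone();
            Arrays.sort(sorted);                       // O(n log n)
            if (sorted[0] != 1) {
                return 1;                              // edge case: 1 itself is missing
            }
            for (int i = 1; i < sorted.length; i++) {  // O(n) scan
                if (sorted[i] - sorted[i - 1] > 1) {
                    return sorted[i - 1] + 1;          // found the gap
                }
            }
            return sorted[sorted.length - 1] + 1;      // no gap: N itself is missing
        }
    }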

Second algorithm: I keep the list as is. I make another array that contains all the numbers 1-N, already sorted. I iterate over each number in this new array and search for it in the original list. Once I find the number in the original list, I remove it from that list (and decrement the size of that list while doing so; let's say it is a dynamic data structure). I do this for every number. If a number goes through the entirety of the list without being found, so the list stays the same size, that is the missing number.
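
A similar hedged sketch of this second approach (again Java with illustrative names, using an ArrayList as the "dynamic data structure"):

    import java.util.ArrayList;
    import java.util.List;

    class MissingByRemoval {
        // Mirrors the second algorithm: for each candidate 1..N, search the
        // original (unsorted) list and remove the match; the candidate that
        // is never found is the missing number.
        static int findMissing(int[] numbers, int n) {
            List<Integer> remaining = new ArrayList<>();
            for (int value : numbers) {
                remaining.add(value);
            }
            for (int candidate = 1; candidate <= n; candidate++) {
                // remove(Object) performs a linear O(n) search before removing
                if (!remaining.remove(Integer.valueOf(candidate))) {
                    return candidate;
                }
            }
            throw new IllegalStateException("no number in 1..N was missing");
        }
    }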

I would think that algorithm 2 would be faster, just because it's decrementing the size of the list and only doing at most 1 full pass, and at best it should be O(1) (if you hit the number you're searching for right off the bat).

The first algorithm would be O(n log n), since you're doing a sort and then iterating over the list again.

Or maybe they are both O(n log n)? This has been bugging me since last night, because it seems like it should be pretty simple.

nobody
  • Do try to make a habit of looking at the related questions before asking your question - [Easy interview question got harder: given numbers 1..100, find the missing number(s)](http://stackoverflow.com/q/3492302) – Bernhard Barker May 13 '14 at 15:48
  • While this is a good post it is not related to the particular answer I was looking for. I didn't want to find the 'correct' answer to the problem, I was looking for the time complexity of these two particular algorithms in relation to the problem. – nobody May 13 '14 at 18:40
  • You should **explicitly** point out in the question that you're not looking for a better solution. I think the +8 on the answer pointing out a better solution is a pretty clear indicator that this is necessary. – Bernhard Barker May 13 '14 at 18:50
  • First line. "I'm trying to determine which of my two solutions is faster in terms of Big-Oh." – nobody May 13 '14 at 21:00

4 Answers

10

Calculate the sum of the numbers in the list, and then subtract that calculated sum from the sum it should be, N(N+1)/2:

    ShouldBeSum = N * (N + 1) / 2;
    MissingNumber = ShouldBeSum - CalculatedSum;
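
Spelled out as a runnable sketch (illustrative Java, assuming the values are 1..N with exactly one missing and no duplicates):

    class MissingBySum {
        static int findMissing(int[] numbers, int n) {
            long shouldBeSum = (long) n * (n + 1) / 2;   // what 1..N should add up to
            long calculatedSum = 0;
            for (int value : numbers) {                  // single O(n) pass
                calculatedSum += value;
            }
            return (int) (shouldBeSum - calculatedSum);  // the difference is the missing number
        }
    }
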
Alexandru Chichinete
1

Just complementing kikyalex's answer, the second algorithm would be even slower, because for each number in the sorted list, you have to find it in the original list, performing an O(n) search each time. So the second algorithm is O(n^2).

Juan Lopes
0

The first is O(n log n) + O(n), i.e. O(n log n), while the second is O(n^2): on average each search requires O(n), and a search is required for each of the n elements, hence O(n * n).

nikhil_vyas
0

The second one as described (changing the length of the list as you go) is O(N^2).

However, it can be made O(N) with a slight modification (though it also takes O(N) memory):

• First find the min of the list: O(N).
• Initialize an all-false boolean array the size of the list: O(N).
• Go over the first list (O(N)), each time setting array[number - minNumber] to true (O(1) per element).
• Check the array for a false: O(N).

So the total is O(N), and it works with duplicates; see the sketch below.
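
A possible sketch of that marking approach (Java, names illustrative; note that deriving the minimum from the list itself assumes the missing value is not the smallest value of the range):

    class MissingByMarking {
        // O(N) time, O(N) extra memory.
        static int findMissing(int[] numbers) {
            int min = numbers[0];
            for (int value : numbers) {                  // O(N): find the minimum
                min = Math.min(min, value);
            }
            // One slot per value of the consecutive range min..min+numbers.length
            // (the range is one longer than the list because one value is missing).
            boolean[] seen = new boolean[numbers.length + 1];
            for (int value : numbers) {                  // O(N) marking, O(1) per element
                seen[value - min] = true;
            }
            for (int i = 0; i < seen.length; i++) {      // O(N): the unmarked slot is the gap
                if (!seen[i]) {
                    return min + i;
                }
            }
            throw new IllegalStateException("no value was missing");
        }
    }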

Though if you are promised no duplicates, then Alexandru Chichinete's answer is faster (and takes much less memory) once modified for a minimum that isn't 1, which is pretty easy (a minimum of 1 was never mentioned in the original problem); see the sketch below.
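
For instance, a hedged sketch of that modification, assuming the range endpoints min and max are known (names are illustrative):

    class MissingBySumGeneral {
        // The expected sum of the consecutive range min..max is (min + max) * count / 2.
        static int findMissing(int[] numbers, int min, int max) {
            long count = (long) max - min + 1;
            long shouldBeSum = ((long) min + max) * count / 2;
            long calculatedSum = 0;
            for (int value : numbers) {
                calculatedSum += value;
            }
            return (int) (shouldBeSum - calculatedSum);
        }
    }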