O(n) space complexity means that, in the worst case, merge sort needs memory proportional to the number of elements in the initial array. But doesn't it create new arrays while making the recursive calls? Why is that space not counted?
-
Why do you think it doesn't create new arrays? It needs to copy elements out to new arrays for each merge step – OneCricketeer Jan 12 '16 at 13:33
-
Please see this: http://stackoverflow.com/questions/10342890/merge-sort-time-and-space-complexity – zangw Jan 12 '16 at 13:35
-
Do you mean overhead memory while creating new arrays? – Ferit Jan 12 '16 at 15:58
-
Even 5n + 999 is O(n). – greybeard Jan 12 '16 at 18:37
4 Answers
A worst-case implementation of top-down merge sort can take more space than the original array if it allocates both halves of the array in mergesort() before making the recursive calls to itself.
A more efficient top-down merge sort uses an entry function that performs a one-time allocation of a temp buffer, passing the buffer's address as a parameter to a pair of mutually recursive functions that generate indices and merge data between the two arrays.
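To illustrate the one-time-allocation idea, here is a minimal Java sketch (the names are mine, not from any particular library). Instead of two mutually recursive functions it uses a single function whose source and destination arrays swap roles at each level of recursion, which achieves the same effect: the temp buffer is allocated exactly once, and no copy-back pass is needed.

```java
static void sort(int[] a) {
    int[] b = a.clone();            // one-time allocation of the temp buffer
    splitMerge(b, 0, a.length, a);  // sort data from b[] into a[]
}

static void splitMerge(int[] src, int begin, int end, int[] dst) {
    if (end - begin <= 1) return;
    int mid = (begin + end) / 2;
    // Swap the roles of the two arrays at each level: sort both halves
    // from dst into src, then merge the sorted halves back into dst.
    splitMerge(dst, begin, mid, src);
    splitMerge(dst, mid, end, src);
    merge(src, begin, mid, end, dst);
}

// Merge src[begin..mid) and src[mid..end) into dst[begin..end).
static void merge(int[] src, int begin, int mid, int end, int[] dst) {
    int i = begin, j = mid;
    for (int k = begin; k < end; k++) {
        if (i < mid && (j >= end || src[i] <= src[j]))
            dst[k] = src[i++];
        else
            dst[k] = src[j++];
    }
}
```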
In the case of a bottom-up merge sort, a temp array half the size of the original array can be used: merge both halves of the array so that the first half of the data ends up in the temp array and the second half in the original array, then do a final merge back into the original array (a sketch of the plain bottom-up structure follows below).
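The half-size-buffer variant is fiddly to get right, so as a point of comparison, here is a plain bottom-up merge sort in Java with a full-size temp buffer; the half-buffer optimization keeps the same pass structure but changes where the merged runs land. This is a sketch, not the exact algorithm the paragraph above describes.

```java
static void bottomUpMergeSort(int[] a) {
    int n = a.length;
    int[] tmp = new int[n];
    // Merge runs of width 1, 2, 4, ... until the array is one sorted run.
    for (int width = 1; width < n; width *= 2) {
        for (int lo = 0; lo < n; lo += 2 * width) {
            int mid = Math.min(lo + width, n);
            int hi = Math.min(lo + 2 * width, n);
            // Merge a[lo..mid) and a[mid..hi) into tmp[lo..hi).
            int i = lo, j = mid;
            for (int k = lo; k < hi; k++)
                tmp[k] = (i < mid && (j >= hi || a[i] <= a[j])) ? a[i++] : a[j++];
        }
        // Copy the merged runs back for the next pass.
        System.arraycopy(tmp, 0, a, 0, n);
    }
}
```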
However, the space complexity is O(n) in either case, since constant factors like 2 or 1/2 are ignored in big-O notation.

Merge sort can make do with a single extra buffer of the same size as the original array.
In the usual version, you merge from the array into the extra buffer and then copy the result back into the array.
In a more advanced version, the merges alternate direction: from the array into the buffer, then from the buffer back into the array, which avoids the copy-back step.
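Here is a minimal Java sketch of the usual version (the names and signature are mine; `buf` is the single full-size extra buffer): each merge writes into the buffer and is then copied back, so `a[lo..hi)` is sorted in place after every call.

```java
// Sorts a[lo..hi) using buf as scratch space.
static void mergeSort(int[] a, int[] buf, int lo, int hi) {
    if (hi - lo <= 1) return;
    int mid = (lo + hi) / 2;
    mergeSort(a, buf, lo, mid);
    mergeSort(a, buf, mid, hi);
    // Merge the two sorted halves of a into buf ...
    int i = lo, j = mid;
    for (int k = lo; k < hi; k++)
        buf[k] = (i < mid && (j >= hi || a[i] <= a[j])) ? a[i++] : a[j++];
    // ... then copy back, so a[lo..hi) is sorted again.
    System.arraycopy(buf, lo, a, lo, hi - lo);
}
```

Call it as `mergeSort(a, new int[a.length], 0, a.length)`: one buffer of the same size as the original array, allocated once. The alternating version removes the copy-back by swapping the roles of `a` and `buf` at each level of recursion, as in the sketch in the first answer.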
Note: This answer is wrong, as was pointed out to me in the comments. I leave it here as I believe it is helpful to most people who want to understand these things, but remember that this algorithm is actually called in-place merge sort and can have a different runtime complexity than pure merge sort.
Merge sort is easy to implement so that it uses the same array for everything, without creating new arrays: just pass the bounds in each recursive call. So something like this (in pseudocode):
mergesort(array) ->
    mergesort'(array, 0, length of array - 1)

mergesort'(array, start, end) ->
    if start >= end, return
    mid = (start + end) / 2
    mergesort'(array, start, mid)
    mergesort'(array, mid + 1, end)
    merge(array, start, mid, mid + 1, end)

merge(array, start1, end1, start2, end2) ->
    // This function merges the two partitions
    // by just moving elements inside array

-
What you're describing is the *in-place* merge sort, and it's **not** easy to implement. http://stackoverflow.com/questions/2571049/how-to-sort-in-place-using-the-merge-sort-algorithm – Karoly Horvath Jan 12 '16 at 13:42
-
You are indeed correct. My mistake! I'll leave this as an example of how not thinking something through entirely can lead to the wrong conclusion. – Emil Vikström Jan 12 '16 at 13:47
In merge sort, the space complexity is always Ω(n), as you have to store the elements somewhere. The *additional* space complexity can be O(n) in an implementation using arrays and O(1) in linked-list implementations. In practice, implementations using lists need extra space for the list pointers, so unless you already have the list in memory it shouldn't matter.

Edit: if you count stack frames, then it's O(n) + O(log n), so still O(n) in the case of arrays. In the case of lists it's O(log n) additional memory.
That's why in merge-sort complexity analysis people mention 'additional space requirement' or things like that. It's obvious that you have to store the elements somewhere, but it's always better to mention 'additional memory' to keep purists at bay.
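To make the linked-list case concrete, here is a hedged Java sketch (the `Node` class is a hypothetical helper for this example): no elements are copied, only `next` pointers are rewired, so the only additional memory is the O(log n) recursion stack.

```java
// A minimal singly linked list node (hypothetical helper for this sketch).
static class Node {
    int val;
    Node next;
    Node(int v) { val = v; }
}

// Recursive merge sort on a list: no element copies, only pointer
// rewiring, so the extra memory is just the O(log n) recursion stack.
static Node sort(Node head) {
    if (head == null || head.next == null) return head;
    // Find the middle with slow/fast pointers and split the list there.
    Node slow = head, fast = head.next;
    while (fast != null && fast.next != null) {
        slow = slow.next;
        fast = fast.next.next;
    }
    Node second = slow.next;
    slow.next = null;
    return merge(sort(head), sort(second));
}

// Splice two sorted lists together by relinking nodes.
static Node merge(Node a, Node b) {
    Node dummy = new Node(0), tail = dummy;
    while (a != null && b != null) {
        if (a.val <= b.val) { tail.next = a; a = a.next; }
        else                { tail.next = b; b = b.next; }
        tail = tail.next;
    }
    tail.next = (a != null) ? a : b;
    return dummy.next;
}
```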
