
The space complexity of Quicksort is listed as O(log n).

However, Quicksort can run without any additional memory: at each level, during the partition step, entries are swapped until they end up in the left or right partition relative to the chosen pivot. This recursive swapping-into-partitions can be done on the list itself, without the partitions at one level of recursion interfering with one another, and without the calls at different levels of recursion interfering either.
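For illustration, here is a minimal sketch of that in-place partition step (this uses the Lomuto scheme with the last element as pivot; both choices are assumptions for the example, not something stated above):

```python
def partition(a, lo, hi):
    """Partition a[lo..hi] in place around a[hi]; return the pivot's final index."""
    pivot = a[hi]                      # last element as pivot (one common choice)
    i = lo                             # boundary of the "<= pivot" region
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]    # swap the entry into the left partition
            i += 1
    a[i], a[hi] = a[hi], a[i]          # place the pivot between the partitions
    return i
```

No auxiliary array is allocated; the only extra storage is a few local variables.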

What's the use for extra memory in Quicksort?

TIA.

//============================

EDIT:

I agree with the stack space mentioned in the comments/answers -- I had missed that.

still,

Quicksort runs in O(n log n) time in the expected case, by forming (nearly) equal-size partitions at each level of recursion.

The call tree is a binary tree -- a full tree in the optimum case -- with height log n. Each node of this tree is a recursive call, and there are O(n) nodes on it, so the stack space in this case would be on the order of n, not log n. The recursive calls on the left and right partitions are, it seems, live simultaneously -- the call tree is full at some point of execution.

So the average-case space complexity should be O(n), not O(log n) (?)

This also contradicts the space complexity of merge sort: merge sort's space complexity is listed as O(n), and it processes similarly.

Roam
  • It's the space used to make the recursive calls. When you make the recursive calls, you need to be able to store the stack frame and the return address of each call. – Ryan J Apr 20 '15 at 01:57
  • You ask the use of extra memory, while the description states that the algorithm can process without it? What is your question exactly? – dmaij Apr 20 '15 at 01:57

3 Answers


Quicksort normally uses O(log n) extra memory, stored on the stack. It's not O(n), because the binary tree is never explicit in memory, we just do a post-order traversal of it (that is: only one path in the tree is ever stored at a given time).

Mergesort is listed as O(n) because we usually copy the result of the merge into a new array. In-place sorting is possible, but increases the time complexity to O(n log² n), according to Wikipedia. It would still use O(log n) for the recursion.

Jordi Vermeulen

Stack space for the recursion. I.e. you don't need to store any additional data, but you need to store the return addresses (EDIT: and other associated information as applicable to the language in question, thanks @dmaij for the reminder) for each recursive sub-call. Since Quicksort generates a stack of depth log(n), you need that many stack frames simultaneously on the call stack.
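One way to see this concretely is to instrument the recursion depth. The sketch below is a hypothetical quicksort (random pivot, Lomuto partition -- both details assumed for the example) that records the deepest call actually reached; for a shuffled input the maximum depth typically stays near log n, far below n:

```python
import random

def quicksort(a, lo=0, hi=None, depth=0, tracker=None):
    """In-place quicksort with random pivots; records the deepest recursion seen."""
    if hi is None:
        hi = len(a) - 1
    if tracker is not None:
        tracker["max_depth"] = max(tracker["max_depth"], depth)
    if lo >= hi:
        return
    # Random pivot: swap it to the end, then Lomuto-partition around it.
    r = random.randint(lo, hi)
    a[r], a[hi] = a[hi], a[r]
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    quicksort(a, lo, i - 1, depth + 1, tracker)  # the left path fully unwinds...
    quicksort(a, i + 1, hi, depth + 1, tracker)  # ...before the right path starts

tracker = {"max_depth": 0}
data = list(range(1024))
random.shuffle(data)
quicksort(data, tracker=tracker)
# tracker["max_depth"] is O(log n) in expectation -- nowhere near n = 1024
```

The key point is in the last two recursive calls: the left call returns (and its frames are popped) before the right call begins, so only one root-to-leaf path of the call tree occupies the stack at any moment.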

Amadan

Quicksort with random pivots has space complexity:

Average case = O(log N)
Worst case = O(N)

The worst case arises when the sort is not balanced. However, it can be avoided by sorting the smaller sub-array first and then tail-recursing on the larger one. This improved version has a worst-case space complexity of O(log N). Note that tail recursion is implemented internally by most compilers, so no extra code needs to be written for it.
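A sketch of that improvement, assuming a Lomuto partition (written in Python here, so the "tail recursion" on the larger side appears as an explicit loop -- CPython does not perform tail-call optimization, which also illustrates doynax's point below about not relying on the compiler for it):

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort: recurse on the smaller partition, loop on the larger.
    The recursion depth is then bounded by O(log n) even in the worst case."""
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        # Lomuto partition around a[hi].
        pivot, i = a[hi], lo
        for j in range(lo, hi):
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        if i - lo < hi - i:          # left side is smaller: recurse on it...
            quicksort(a, lo, i - 1)
            lo = i + 1               # ...and iterate on the right side
        else:                        # right side is smaller: recurse on it...
            quicksort(a, i + 1, hi)
            hi = i - 1               # ...and iterate on the left side
```

Each recursive call handles at most half the current range, so at most log₂ n frames can be live at once.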

Merge sort has an average/worst case space complexity of O(N) because the merge subroutine copies the array to a new array which requires O(N) extra space.
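A minimal top-down mergesort sketch showing where that O(N) comes from -- the merged buffer built at each level is the extra space:

```python
def merge_sort(a):
    """Return a sorted copy of a; the merge step allocates an O(n) buffer."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0              # auxiliary buffer: the O(n) extra space
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])              # append whichever side has leftovers
    merged.extend(right[j:])
    return merged
```

Unlike quicksort's partition, the merge cannot easily interleave two sorted halves in place, which is why the copy (and the O(N) bound) is the standard formulation.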

Edit: As suggested in the comments, reliance on the tail recursive optimization is not good. Incorporating it in the code instead would be a better idea.

Nikunj Banka
  • Indeed they do but be careful with tail recursion optimizations in imperative languages. In C/C++ especially any number of small subtle changes tend to prevent these optimizations, debug builds being the most common case. Consequently I would recommend not _relying_ on this optimization outside of languages designed with it in mind, at least in cases where the iterative rewrite is straightforward. – doynax Apr 20 '15 at 11:51