Why might quicksort be better than merge sort?
-
1 @qrdl: There are a lot more properties to sorting algorithms than speed. – Georg Schölly Mar 25 '09 at 07:50
-
On a processor with 16 registers, like a PC in 64-bit mode, a 4-way merge sort can be as fast as or a bit faster than a standard quicksort for cases like sorting an array of pseudo-random integers. A 4-way merge sort does the same total number of operations as 2-way, but it's 1.5× the compares and 0.5× the moves, and the compares are a bit more cache friendly than the moves. To be fair, since using a 4-way merge sort is an optimization, a dual-pivot quicksort should then be a bit faster. Most computers have gigabytes of RAM, so merge sort's space overhead is usually not an issue. – rcgldr Apr 22 '16 at 02:50
11 Answers
Typically, quicksort is significantly faster in practice than other Θ(n log n) algorithms, because its inner loop can be implemented efficiently on most architectures, and for most real-world data it is possible to make design choices that minimize the probability of requiring quadratic time.
Note that the very low memory requirement is a big plus as well.
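To make the inner-loop point concrete, here is a minimal Java sketch of a Hoare-style partition (the middle-element pivot is picked purely for illustration): the hot path is two tight scanning loops plus an in-place swap, with no auxiliary array and a largely sequential access pattern.

```java
// Minimal sketch of quicksort's tight inner loop (Hoare partitioning).
// The two do-while scans compile to a handful of instructions each and
// touch the array sequentially from both ends, which is cache friendly.
static int partition(int[] a, int lo, int hi) {
    int pivot = a[lo + (hi - lo) / 2];       // middle element as pivot (illustrative)
    int i = lo - 1, j = hi + 1;
    while (true) {
        do { i++; } while (a[i] < pivot);    // scan right for an element >= pivot
        do { j--; } while (a[j] > pivot);    // scan left for an element <= pivot
        if (i >= j) return j;
        int t = a[i]; a[i] = a[j]; a[j] = t; // in-place swap, no extra memory
    }
}

static void quicksort(int[] a, int lo, int hi) {
    if (lo < hi) {
        int p = partition(a, lo, hi);
        quicksort(a, lo, p);                 // note: left half includes index p
        quicksort(a, p + 1, hi);
    }
}
```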

-
28 If the quicksort is initialized up front with a random 64-bit number *N*, and the pivot for every section is at index *N* mod *SectionSize*, then the probability of the algorithm demonstrating any complexity C where C is worse than O(n log n) decreases exponentially as the input size grows. – Sam Harwell Oct 13 '09 at 02:35
-
3 39% more compares in quicksort than merge sort, but the number of swaps in quicksort is lower than in merge sort. A swap operation is costlier on the CPU than a compare, as mentioned [here](http://stackoverflow.com/questions/27887198/which-is-more-expensive-operation-swap-or-comparison-in-integer-array-in-java). Testing on a home laptop, quicksort takes `0.6 sec` to sort a million items versus `1 sec` for merge sort, whereas insertion sort takes `2.8 hours`. – overexchange Dec 31 '16 at 05:38
Quick sort is typically faster than merge sort when the data is stored in memory. However, when the data set is huge and is stored on external devices such as a hard drive, merge sort is the clear winner in terms of speed. It minimizes the expensive reads of the external drive and also lends itself well to parallel computing.
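To see why, note that the merge step only ever reads its inputs front to back. The in-memory Java sketch below shows that access pattern; an external merge sort streams sorted runs from disk the same way, replacing random seeks with sequential reads.

```java
// Sketch of a merge step: both inputs are consumed strictly sequentially,
// which is exactly the access pattern that disks (and tapes) are good at.
static int[] merge(int[] left, int[] right) {
    int[] out = new int[left.length + right.length];
    int i = 0, j = 0, k = 0;
    while (i < left.length && j < right.length)     // advance one cursor per output
        out[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
    while (i < left.length)  out[k++] = left[i++];  // drain whichever run remains
    while (j < right.length) out[k++] = right[j++];
    return out;
}
```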

-
Quicksort lends itself to parallel computing basically as well as merge sort. Quicksort is top-down whereas merge sort is bottom-up, which I was about to say is irrelevant to parallelization, but now that I think about it I actually think quicksort is a little better for parallelization. I'm thinking the top-down approach is better for work stealing in the fork/join framework, which comes into play on hyper-modern CPUs like Intel Alder Lake and the latest smartphone chips that have asymmetric cores. – M.J. Rayburn May 29 '22 at 23:59
-
I'm also not sure merge sort is better for cache locality either. For one thing, disk-to-RAM is the same thing in principle as RAM-to-cache, so you would expect to see the same phenomenon in both places. Furthermore, again, the fundamental difference between quicksort and merge sort is that one is top-down whereas the other is bottom-up, which makes no difference in terms of overall locality of reference or even being able to fit all of something somewhere. Merge sort can be broken up into small pieces at the beginning, whereas quicksort can do it at the end. Why does the timing of that matter? – M.J. Rayburn May 30 '22 at 00:08
-
As I understand it, quicksort is faster because **it** has (slightly) better locality of reference, by virtue of the fact that it can be executed within a single array, whereas an efficient implementation of merge sort will alternate back and forth between two arrays. This of course also saves memory and a memory allocation/de-allocation. – M.J. Rayburn May 30 '22 at 00:15
-
Okay, upon further thought, I retract the portion of my statement on parallelization in which I said that quicksort is a little computationally superior to merge sort in the context of work stealing. You can implement the fork/join framework in an equally effective manner for both. However, I am confident that employing fork/join on quicksort is a little *simpler*. – M.J. Rayburn May 30 '22 at 01:28
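As an illustration of the fork/join discussion in these comments, here is a hedged Java sketch of quicksort on the fork/join framework. The cutoff size and the Hoare partition are arbitrary choices for the example, not tuned recommendations: after partitioning, the two halves are independent, so one half is forked as a stealable task while the current thread keeps working on the other.

```java
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// Sketch of quicksort on fork/join: each partition yields two independent
// subranges, one of which is forked so idle workers can steal it.
class ParallelQuicksort extends RecursiveAction {
    static final int CUTOFF = 1 << 13;   // below this size, just sort serially
    final int[] a; final int lo, hi;     // sorts a[lo..hi] inclusive

    ParallelQuicksort(int[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

    @Override protected void compute() {
        if (hi - lo < CUTOFF) { Arrays.sort(a, lo, hi + 1); return; }
        int p = partition(a, lo, hi);
        ParallelQuicksort left = new ParallelQuicksort(a, lo, p);
        left.fork();                                   // stealable subtask
        new ParallelQuicksort(a, p + 1, hi).compute(); // keep working here
        left.join();
    }

    static int partition(int[] a, int lo, int hi) {    // Hoare scheme, middle pivot
        int pivot = a[lo + (hi - lo) / 2], i = lo - 1, j = hi + 1;
        while (true) {
            do { i++; } while (a[i] < pivot);
            do { j--; } while (a[j] > pivot);
            if (i >= j) return j;
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }

    public static void main(String[] args) {
        int[] data = new java.util.Random(42).ints(1_000_000).toArray();
        ForkJoinPool.commonPool().invoke(new ParallelQuicksort(data, 0, data.length - 1));
    }
}
```

Merge sort parallelizes the same way by forking the two recursive sorts, with the merge as the join step.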
For merge sort the worst case is O(n log n); for quicksort it is O(n²). For the other cases (average, best) both are O(n log n). However, quicksort uses constant extra space, whereas merge sort's space depends on the structure you're sorting.
See this comparison.
You can also see it visually.

-
4 That is wrong. An acceptable implementation of quicksort sorts the smallest subrange first. – Stephan Eggermont Dec 13 '09 at 21:59
-
Very good comparison article indeed. Both explanation and simple implementations. – extraneon Apr 08 '10 at 08:05
-
1 @Stephan Eggermont: You're right -- quicksort uses at least O(log n) and at most O(n) extra *space*, since every split (recursion) requires constant space to store the pivot location and there must be at least log2(n) of them. I must have got confused and put the extra factor of n in by mistake. – j_random_hacker Oct 30 '10 at 07:00
-
Merge sort can be implemented with two arrays of size n (regardless of the contents of the array you're sorting) with the algorithm switching back and forth between the two at each level of the recursion. – M.J. Rayburn May 30 '22 at 01:37
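For concreteness, here is a sketch of that two-array scheme in Java: the recursion swaps the roles of source and destination at each level, so a single auxiliary array of size n is allocated once and elements are merged back and forth instead of being copied.

```java
// Two-array merge sort: sort(src, dst, ...) leaves dst[lo..hi) sorted,
// assuming src and dst start with the same contents. The roles of the two
// arrays alternate at each recursion level, so no per-level copying is done.
static void mergeSort(int[] a) {
    int[] aux = a.clone();             // one extra array of size n, allocated once
    sort(aux, a, 0, a.length);         // final result lands in a
}

static void sort(int[] src, int[] dst, int lo, int hi) {
    if (hi - lo <= 1) return;          // single element: already "sorted" in dst
    int mid = (lo + hi) >>> 1;
    sort(dst, src, lo, mid);           // swap roles: children leave src sorted
    sort(dst, src, mid, hi);
    merge(src, dst, lo, mid, hi);      // merge the sorted halves from src into dst
}

static void merge(int[] src, int[] dst, int lo, int mid, int hi) {
    for (int k = lo, i = lo, j = mid; k < hi; k++) {
        if (i >= mid)             dst[k] = src[j++];  // left run exhausted
        else if (j >= hi)         dst[k] = src[i++];  // right run exhausted
        else if (src[j] < src[i]) dst[k] = src[j++];
        else                      dst[k] = src[i++];
    }
}
```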
While quicksort is often a better choice than merge sort, there are definitely times when merge sort is theoretically a better choice. The most obvious is when it's extremely important that your algorithm run faster than O(n^2). Quicksort is usually faster than this, but given the theoretical worst possible input, it could run in O(n^2), which is worse than the worst possible merge sort.
Quicksort is also more complicated than mergesort, especially if you want to write a really solid implementation, and so if you're aiming for simplicity and maintainability, merge sort becomes a promising alternative with very little performance loss.
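To make the worst case concrete, the deliberately naive Java sketch below always picks the first element as pivot (something no solid implementation would do). On already-sorted input every split is maximally lopsided, and the comparison count comes out near n²/2 rather than n log n.

```java
// Deliberately naive quicksort: first element as pivot (Lomuto partition).
// On sorted input every partition splits 0 / n-1, so the work is quadratic.
static long comparisons = 0;

static void naiveQuicksort(int[] a, int lo, int hi) {
    if (lo >= hi) return;
    int pivot = a[lo], i = lo;                   // worst-possible pivot choice here
    for (int j = lo + 1; j <= hi; j++) {
        comparisons++;
        if (a[j] < pivot) { int t = a[++i]; a[i] = a[j]; a[j] = t; }
    }
    int t = a[lo]; a[lo] = a[i]; a[i] = t;       // put pivot in its final slot
    naiveQuicksort(a, lo, i - 1);
    naiveQuicksort(a, i + 1, hi);
}

public static void main(String[] args) {
    int n = 10_000;
    int[] sorted = new int[n];
    for (int k = 0; k < n; k++) sorted[k] = k;   // already sorted: the bad case
    naiveQuicksort(sorted, 0, n - 1);
    System.out.println(comparisons);             // prints 49995000 = n*(n-1)/2
}
```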

-
7 Where mergesort shines is when you need either a stable sort (elements that compare equal are not rearranged) or when you are using sequential (rather than random-access) "memory" (e.g. you can mergesort efficiently using tape drives). – j_random_hacker Mar 26 '09 at 08:00
-
6 @j_random_hacker: your tape drive example is a bit obscure; more common examples might be sorting data as it is received from a network connection, or sorting data structures which don't allow efficient random access, like linked lists – Christoph Oct 29 '10 at 07:46
-
An efficient version of Quick Sort is actually a little bit simpler than an efficient version of Merge Sort because *efficient* versions of Merge Sort alternate back and forth between two arrays of size n to avoid excessive memory allocations and de-allocations which are expensive. So they have to keep track of indices just like Quick Sort and on top of that they have to have logic for alternating between the two arrays. Perhaps you're referring to something like Intro Sort which switches from Quick Sort to Heap Sort if the recursion depth on Quick Sort exceeds log(n). – M.J. Rayburn May 30 '22 at 01:49
I personally wanted to test the difference between quicksort and merge sort myself, so I measured the running times for a sample of 1,000,000 elements.
Quicksort finished in 156 milliseconds, whereas merge sort did the same in 247 milliseconds.
The quicksort data, however, was random, and quicksort performs well when the data is random. That's not the case with merge sort: merge sort performs the same regardless of whether the data is sorted or not. But merge sort requires one full extra array of space, while quicksort does not, since it's an in-place sort.
I have written a comprehensive working program for both, with illustrative pictures too.
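For anyone who wants to reproduce this kind of measurement, below is a rough, unscientific Java harness (no JVM warm-up, single run). It leans on the fact that the JDK's Arrays.sort is a dual-pivot quicksort for primitive arrays and a merge-based TimSort for object arrays; absolute numbers will differ from those above, and boxing overhead inflates the object side.

```java
import java.util.Arrays;
import java.util.Random;

// Rough comparison: Arrays.sort(int[]) is a dual-pivot quicksort, while
// Arrays.sort(Object[]) is a merge-based sort (TimSort) in the JDK.
public class SortBench {
    public static void main(String[] args) {
        int n = 1_000_000;
        int[] primitives = new Random(1).ints(n).toArray();
        Integer[] boxed = Arrays.stream(primitives).boxed().toArray(Integer[]::new);

        long t0 = System.nanoTime();
        Arrays.sort(primitives);            // quicksort family
        long t1 = System.nanoTime();
        Arrays.sort(boxed);                 // merge sort family (plus boxing cost)
        long t2 = System.nanoTime();

        System.out.printf("quicksort: %d ms, mergesort: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}
```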
-
Weird. I also made a Java program to sort a million elements for both quicksort and mergesort. My quicksort executed in around the same time as yours, but my mergesort executes in something like 12 minutes... any reason why? Can you look at my code? – compski Nov 03 '13 at 14:30
-
2 @compski: I see a problem at line 14 in your code. Instead of using ONE extra array's worth of space, you are creating innumerable arrays, which will kill the memory. Here is an example which uses only ONE extra array (see line 24): http://tech.bragboy.com/2010/01/merge-sort-in-java.html. Let me know if this answered your problem; if not, I can help you more. – bragboy Oct 25 '16 at 11:21
-
Quicksort is in place: you just need to swap positions of data during the partitioning function. Mergesort requires a lot more data copying: you need additional temporary storage (typically the same size as your original data array) for the merge function.
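A minimal Java sketch of the copying being described: the merge step needs a temporary buffer, here sized to the range being merged, and every element is copied out and back once per merge.

```java
// Merge a[lo..mid] and a[mid+1..hi], both already sorted, using tmp as
// scratch space. Note the extra storage and the two full passes of copying.
static void merge(int[] a, int[] tmp, int lo, int mid, int hi) {
    System.arraycopy(a, lo, tmp, lo, hi - lo + 1);   // copy the range out
    int i = lo, j = mid + 1;
    for (int k = lo; k <= hi; k++) {                 // copy back in merged order
        if (i > mid)              a[k] = tmp[j++];   // left run exhausted
        else if (j > hi)          a[k] = tmp[i++];   // right run exhausted
        else if (tmp[j] < tmp[i]) a[k] = tmp[j++];
        else                      a[k] = tmp[i++];
    }
}
```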

In addition to the others: merge sort is very efficient for immutable data structures like linked lists and is therefore a good choice for (purely) functional programming languages.
A poorly implemented quicksort can be a security risk.
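As a sketch of the linked-list point (using a hypothetical cons-list type; Java 16+ for records), merge sort needs nothing but front-to-back traversal and fresh node allocation, so it works on immutable lists without any random access or in-place swaps.

```java
// Hypothetical immutable cons list; null represents the empty list.
record Node(int head, Node tail) {}

static int length(Node xs) { int n = 0; for (; xs != null; xs = xs.tail()) n++; return n; }

static Node take(Node xs, int n) {           // first n elements, as new nodes
    return n == 0 ? null : new Node(xs.head(), take(xs.tail(), n - 1));
}

static Node drop(Node xs, int n) {           // list without its first n elements
    for (; n > 0; n--) xs = xs.tail();
    return xs;
}

static Node merge(Node a, Node b) {          // purely functional merge
    if (a == null) return b;
    if (b == null) return a;
    return a.head() <= b.head()
        ? new Node(a.head(), merge(a.tail(), b))
        : new Node(b.head(), merge(a, b.tail()));
}

static Node mergeSort(Node xs) {
    int n = length(xs);
    if (n <= 1) return xs;                   // empty or singleton: already sorted
    return merge(mergeSort(take(xs, n / 2)), mergeSort(drop(xs, n / 2)));
}
```

(The deep recursion in merge makes this a sketch rather than production code; real functional implementations use accumulators or bottom-up merging to stay stack-safe.)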
-
2 The issue with linked lists is not mutability, but the lack of efficient random access, i.e. you're stuck with online algorithms (merge sort, insertion sort) – Christoph Oct 29 '10 at 07:41
-
@Christoph: Quicksort doesn't require random access; linked lists can be quicksorted efficiently. When you look at how quicksort works, it effectively grows several "lists" from defined starting points within an array. (Heapsort OTOH does require O(1) random access.) – j_random_hacker Jul 15 '12 at 04:36
-
@j_random_hacker: without random access, Quicksort suffers from poor choice of pivot – Christoph Jul 15 '12 at 08:51
-
@Christoph: That crossed my mind too, but I think the only pivot-choosing strategy it would affect would be the (admittedly common) median-of-3 variant in which the 3 are taken from the beginning, middle and end. Even doing an O(n) scan to find these 3 elements won't alter the O(n) time complexity of the partitioning step: it should cause an overall slowdown of <2x. And even this could be removed by having the "parent" recursive call determine these 3 values for each partition as it builds them up (for the midpoint element, update a pointer every 2 iters). – j_random_hacker Jul 15 '12 at 12:35
-
@j_random_hacker: any code reference (preferably C or C++)? I'd like to see how well this works out in practice... – Christoph Jul 15 '12 at 20:25
-
@Christoph: Wish I did! The only relevant thing I've found since thinking about this again is an "external quicksort" that uses what I'd call a "fat pivot": read in as many elements as will fit in RAM into a double-ended priority queue, then stream the remaining elements "through" it, inserting each one and picking off either the min or the max and writing that to 1 of 2 "sides" that will be recursively sorted later. But this is the only fragment I could find on it: http://xlinux.nist.gov/dads/HTML/externalQuicksort.html. I will look into it (someday...) – j_random_hacker Jul 15 '12 at 22:13
-
It is not true that quicksort is better. Also, it depends on what you mean by better: memory consumption or speed.
In terms of memory consumption, in the worst case quicksort can use n^2 memory (i.e. when each partition is 1 to n-1), whereas merge sort uses n log n.
The same comparison holds in terms of speed.

-
Quicksort, in the worst case, will use only O(n) memory. Yes, the time complexity can be O(n^2) in the worst case. To understand why, look at the maximum size of the recursion stack: it equals the maximum number of levels in the DFS tree. Even if each partition is 1, n-1, the number of levels would be n (n, n-1, n-2, n-3 ... 1). – mlfan Mar 02 '17 at 06:21
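For reference, here is a sketch of the standard trick behind the O(log n) stack bound mentioned earlier in this thread: recurse only into the smaller partition and loop on the larger one, so even a worst-case 1 / n-1 split cannot pile up more than log2(n) stack frames.

```java
// Quicksort with bounded stack depth: recurse into the smaller side only,
// then iterate on the larger side. The smaller side is at most half the
// range, so the recursion can be at most log2(n) deep.
static void quicksort(int[] a, int lo, int hi) {
    while (lo < hi) {
        int p = partition(a, lo, hi);
        if (p - lo < hi - p - 1) {        // left part [lo..p] is smaller
            quicksort(a, lo, p);
            lo = p + 1;                   // loop on the larger right part
        } else {
            quicksort(a, p + 1, hi);
            hi = p;                       // loop on the larger left part
        }
    }
}

static int partition(int[] a, int lo, int hi) {  // Hoare scheme, middle pivot
    int pivot = a[lo + (hi - lo) / 2], i = lo - 1, j = hi + 1;
    while (true) {
        do { i++; } while (a[i] < pivot);
        do { j--; } while (a[j] > pivot);
        if (i >= j) return j;
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}
```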
Quicksort is named so for a reason.
Highlights: stability (whether equal elements keep their relative order) is simply an implementation concern (merge sort is stable as usually written, quicksort typically is not), so let's just move on to complexities.
It gets very confusing with big-O notation being spilled and "abused": both have an average-case complexity of O(n log n).
But merge sort is always O(n log n), whereas quicksort with bad partitions, i.e. skewed partitions like 1 element versus all the rest (which can happen with a sorted or reverse-sorted list), can degrade to O(n^2).
So we have randomized quicksort, where we pick the pivot randomly and avoid such skewed partitioning, thereby nullifying the whole n^2 scenario. Even for moderately skewed partitioning like 3:4, we get n*log_{7/4}(n); ideally we want a 1:1 partition, which gives the familiar n*log_2(n).
So it is O(n log n) almost always, and, unlike merge sort, the constants hidden under the big-O notation are better for quicksort. And it doesn't use extra space the way merge sort does.
But getting quicksort to run perfectly requires tweaking; to rephrase, quicksort provides you opportunities to tweak.
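A minimal sketch of the randomized pivot this answer describes, using a Lomuto partition for brevity: swapping a uniformly random element into the pivot slot makes the n^2 behaviour vanishingly unlikely for every input, sorted or not.

```java
import java.util.Random;

// Randomized quicksort sketch: a uniformly random pivot index defeats any
// fixed "bad" input such as sorted or reverse-sorted data.
static final Random RNG = new Random();

static void randomizedQuicksort(int[] a, int lo, int hi) {
    if (lo >= hi) return;
    int r = lo + RNG.nextInt(hi - lo + 1);       // random index in [lo, hi]
    int t = a[lo]; a[lo] = a[r]; a[r] = t;       // move random pivot to the front
    int pivot = a[lo], i = lo;
    for (int j = lo + 1; j <= hi; j++)           // Lomuto partition
        if (a[j] < pivot) { int s = a[++i]; a[i] = a[j]; a[j] = s; }
    t = a[lo]; a[lo] = a[i]; a[i] = t;           // pivot to its final position
    randomizedQuicksort(a, lo, i - 1);
    randomizedQuicksort(a, i + 1, hi);
}
```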

-
Quicksort can be made stable; check [this](http://stackoverflow.com/a/1675152/1779403) post. I hope that helps – idexi Aug 05 '14 at 12:15
-
The statement "unlike merge sort the constants hidden under the 'big-oh' notation are better for quicksort than for mergesort" doesn't qualify as an answer, and more importantly **it isn't even true**. Merge sort will **always** have fewer comparisons than quicksort. Where does this constant number of comparisons you speak of come from? It doesn't exist *in either case*. Quicksort has slightly better locality of reference by virtue of the fact that it's in-place, whereas (an efficient implementation of) merge sort alternates back and forth between two arrays, and it saves a memory allocation. – M.J. Rayburn May 30 '22 at 02:15
The answer would tilt slightly towards quicksort with respect to the changes brought by DualPivotQuicksort for primitive values. It is used in Java 7 to sort in java.util.Arrays.
It is proved that for the Dual-Pivot Quicksort the average number of comparisons is 2*n*ln(n) and the average number of swaps is 0.8*n*ln(n), whereas the classical Quicksort algorithm has 2*n*ln(n) and 1*n*ln(n) respectively. For the full mathematical proof, see the attached proof.txt and proof_add.txt files. Theoretical results are also confirmed by experimental counting of the operations.
You can find the Java 7 implementation here - http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7-b147/java/util/Arrays.java
Further Awesome Reading on DualPivotQuickSort - http://permalink.gmane.org/gmane.comp.java.openjdk.core-libs.devel/2628
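As a hedged sketch of the dual-pivot idea (the real JDK 7 implementation adds insertion-sort cutoffs, five-candidate pivot selection, and equal-element handling that are all omitted here): two pivots P1 <= P2 split the range into three parts, less than P1, between the pivots, and greater than P2, which is where the reduction in average swaps comes from.

```java
// Simplified dual-pivot quicksort: pivots are taken from the two ends of
// the range (the JDK chooses them more carefully). Three-way partition:
//   a[lo..lt-1] < p1,  p1 <= a[lt..gt] <= p2,  a[gt+1..hi] > p2.
static void dualPivotQuicksort(int[] a, int lo, int hi) {
    if (lo >= hi) return;
    if (a[lo] > a[hi]) { int t = a[lo]; a[lo] = a[hi]; a[hi] = t; }
    int p1 = a[lo], p2 = a[hi];                   // p1 <= p2
    int lt = lo + 1, gt = hi - 1, k = lo + 1;
    while (k <= gt) {
        if (a[k] < p1) {                          // belongs in the left part
            int t = a[k]; a[k] = a[lt]; a[lt] = t; lt++; k++;
        } else if (a[k] > p2) {                   // belongs in the right part
            int t = a[k]; a[k] = a[gt]; a[gt] = t; gt--;
        } else {
            k++;                                  // stays in the middle part
        }
    }
    lt--; gt++;
    int t = a[lo]; a[lo] = a[lt]; a[lt] = t;      // pivots into final positions
    t = a[hi]; a[hi] = a[gt]; a[gt] = t;
    dualPivotQuicksort(a, lo, lt - 1);            // < p1
    dualPivotQuicksort(a, lt + 1, gt - 1);        // between the pivots
    dualPivotQuicksort(a, gt + 1, hi);            // > p2
}
```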

Quicksort is in place: you need very little extra memory, which is extremely important.
A good choice of median makes it even more efficient, and even a rough approximation of the median still guarantees Θ(n log n).

-
4 But quicksort requires random access to all the data, which might not be ideal for very large data sets – Martin Beckett Oct 13 '09 at 02:12