-2

I have read about the block sort algorithm and am now wondering why it is not as popular as, for example, quicksort, even though it is better according to the table on this Wikipedia page. Block sort's worst-case time complexity is O(n log(n)) and its memory complexity is constant, whereas quicksort's are O(n^2) and O(log(n)) respectively. Thanks in advance.

BatyaGG
  • Those are time complexity O(f(n)) entries in the table. Time complexity ignores constant factors and lower-order terms. For example, you could have two sort algorithms, both with the same time complexity O(n log(n)), but one of them could be 4 times as slow as the other, since the constant factor of 4 is not included in time complexity. – rcgldr Dec 09 '17 at 17:08
  • (1) It's really improbable to get to quicksort's worst case with a proper implementation. (2) You can't say algorithms have about the same running time when they have the same time complexity. Constant factors, which big-O ignores by definition, is a huge deal. Read [What is a plain English explanation of "Big O" notation?](https://stackoverflow.com/q/487258) – Bernhard Barker Dec 10 '17 at 00:19
  • Why are you downvoting me? – BatyaGG Dec 11 '17 at 14:27

1 Answer

3

Quicksort is typically faster than merge sort because it sorts in place with a very tight inner loop, whereas merge sort has to copy its elements into an auxiliary array. One more thing:

N log N is the number of compares that merge sort does, but it also performs about 6N log N array accesses.
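That access count can be checked empirically. Below is a minimal instrumented sketch (not from the answer itself) using the usual accounting: per merge level, 2N accesses to copy into the auxiliary array, 2N to move elements back, and up to 2N reads for the compares, for at most 6N lg N in total.

```python
import math
import random

def merge_sort_accesses(a):
    """Top-down merge sort that counts array accesses (reads + writes).

    Accounting per merge of m elements: 2m to copy into aux,
    2m to move back, and at most 2m reads for the compares.
    """
    accesses = 0
    aux = a[:]  # auxiliary array

    def sort(lo, hi):
        nonlocal accesses
        if hi - lo <= 1:
            return
        mid = (lo + hi) // 2
        sort(lo, mid)
        sort(mid, hi)
        for k in range(lo, hi):        # copy to aux
            aux[k] = a[k]
            accesses += 2              # one read + one write
        i, j = lo, mid
        for k in range(lo, hi):        # merge back
            if i >= mid:
                a[k] = aux[j]; j += 1; accesses += 2
            elif j >= hi:
                a[k] = aux[i]; i += 1; accesses += 2
            elif aux[j] < aux[i]:
                accesses += 2          # two reads for the compare
                a[k] = aux[j]; j += 1; accesses += 2
            else:
                accesses += 2
                a[k] = aux[i]; i += 1; accesses += 2

    sort(0, len(a))
    return accesses

n = 1024
data = list(range(n))
random.shuffle(data)
count = merge_sort_accesses(data)
assert data == sorted(data)
assert count <= 6 * n * math.log2(n)  # the 6 N lg N bound holds
```

The compare reads are what a compiler can sometimes keep in registers, which is how the lower 2N lg N–3N lg N figures mentioned in the comments arise.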

Quicksort also uses about N log N compares, and if you always shuffle before sorting, you get a probabilistic guarantee that you essentially never hit the worst case.
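As an illustrative sketch (not a production implementation), shuffling first defeats the classic adversarial input, an already sorted array with first-element pivots:

```python
import random

def quicksort(a):
    """In-place quicksort with an initial shuffle (illustrative sketch)."""
    random.shuffle(a)  # makes the O(n^2) worst case vanishingly unlikely

    def sort(lo, hi):
        if hi <= lo:
            return
        pivot = a[lo]
        i, j = lo + 1, hi
        while True:
            while i <= hi and a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1                 # a[lo] == pivot stops this at lo
            if i >= j:
                break
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
        a[lo], a[j] = a[j], a[lo]      # put pivot into its final place
        sort(lo, j - 1)
        sort(j + 1, hi)

    sort(0, len(a) - 1)
    return a

assert quicksort([5, 3, 8, 1, 9, 2, 7]) == [1, 2, 3, 5, 7, 8, 9]
assert quicksort(list(range(1000))) == list(range(1000))  # no worst case
```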

One thing I would say is that Big-O notation is not a perfect way to compare the running times of two sorting algorithms. As I said, quicksort uses N log N compares to sort an array, but that is only true in Big-O terms; with tilde notation, quicksort makes ~1.39N log N compares on average.
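The ~1.39N lg N figure (i.e. 2N ln N) can be observed directly by instrumenting a quicksort to count compares. This is a hypothetical sketch using a simple out-of-place partition, where each element is compared with the pivot exactly once per partitioning step:

```python
import math
import random

def quicksort_compares(a):
    """Plain quicksort (not in place) instrumented to count compares."""
    compares = 0

    def sort(xs):
        nonlocal compares
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        less, geq = [], []
        for x in rest:
            compares += 1          # one compare per element, per partition
            if x < pivot:
                less.append(x)
            else:
                geq.append(x)
        return sort(less) + [pivot] + sort(geq)

    random.shuffle(a)              # make the average-case analysis apply
    return sort(a), compares

n = 4096
result, compares = quicksort_compares(list(range(n)))
assert result == list(range(n))
# average is ~1.39 N lg N; generous slack allows for randomness
assert 1.1 * n * math.log2(n) < compares < 1.7 * n * math.log2(n)
```

With n = 4096 the count typically lands near 1.39 × 4096 × 12 ≈ 68,000, visibly above the N lg N ≈ 49,000 that Big-O alone would suggest.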

Luai Ghunim
  • For a reasonably optimized merge sort that doesn't do copy back, the number of accesses is 3 n log(n). If the two elements being compared fit in registers, then an optimizing compiler will not re-read the element that is being moved, only write the previously read element from a register, in which case the number of array accesses is 2 n log(n). Merge sort does more moves, but fewer compares than quicksort. If sorting an array of pointers to objects, merge sort is usually faster because pointers are moved and objects compared. – rcgldr Dec 09 '17 at 19:56
  • It's not good to compare a 4-way merge sort to simple quicksort; you are improving merge sort and comparing it with simple quicksort. You can use insertion sort with quicksort for smaller arrays, and you can also use 3-way quicksort. – Luai Ghunim Dec 09 '17 at 20:16
  • Check also the locality of reference of quicksort. You can't keep everything in registers; you need a heap or stack data structure for the return addresses of recursive calls and also the elements from previous calls. – Luai Ghunim Dec 09 '17 at 20:21
  • @rcgldr Top-down merge sort still uses 6NlogN array accesses in the worst case, I can prove it :) – Luai Ghunim Dec 09 '17 at 20:41
  • Btw my point is that quicksort is faster than merge sort because it has very few instructions in its inner loop, and with 3-way partitioning quicksort becomes linear for certain key distributions where other sorts are linearithmic :) – Luai Ghunim Dec 09 '17 at 20:44
  • A Java example of top down merge sort with 3NlogN array accesses is included in the second half of [this answer](https://stackoverflow.com/questions/46106922/mergesort-algorithm-whats-the-better-implementation-in-java/46107386#46107386) from a prior thread. If an array element fits in a register, then this becomes 2NlogN accesses for most optimizing compilers. Copy back is avoided by using a pair of mutually recursive functions to change the direction of merge based on level of recursion. (I deleted some of my prior comments as they are no longer needed). – rcgldr Dec 09 '17 at 23:51