
Moved this question over to Programmers, as it didn't seem theoretical enough for CS.
TL;DR
Has anyone tested dual-pivot quicksort performance with expensive-to-swap elements? It seems that in this case it should massively underperform standard quicksort.


Backstory
Inspired by a recent "question" here on Stack Overflow, I decided to go and implement non-trivial versions of the sorts in question (introsort, quicksort with 3-way partitioning, median-of-3 pivot selection, insertion sort for small blocks, etc.).

During some research I also came across dual-pivot quicksort, which is the current quicksort implementation in the Java standard library. The general claim is that it is always at least as good as standard quicksort, and empirical testing seemed to support that. (Which is why it is the current implementation.)

However, it seems that no STL implementation uses dual-pivot quicksort for the quicksort phase of introsort, which made me wonder why. After more research I found this paper. It says that while dual-pivot quicksort performs on average 5% fewer comparisons, it performs significantly more swaps (approximately 80% more). Obviously, since Java has only primitives and reference types, swapping is always cheap. (Even so, Java uses this sort only for primitives, because it is not stable.)

So I wanted to see whether someone has already tested standard quicksort vs. dual-pivot quicksort with elements that are expensive to swap and has the numbers (and possibly source) lying around, or whether I will have to test this myself.

This question is specifically about quicksort variants. To make "expensive to swap" concrete, below is roughly the kind of element type and benchmark harness I have in mind.
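This is only a sketch; the type, sizes, and names are placeholders I made up, and std::sort stands in for whichever quicksort variant is actually being measured:

#include <algorithm>
#include <array>
#include <cstdint>
#include <random>
#include <vector>

// Placeholder "expensive to swap" element: a small key plus a large inline
// payload, so every swap has to move the whole 512-byte object.
struct FatElement {
    std::uint64_t key;
    std::array<std::uint64_t, 63> payload;

    bool operator<(const FatElement& other) const { return key < other.key; }
};

int main() {
    std::vector<FatElement> v(1 << 16);
    std::mt19937_64 rng(42);
    for (auto& e : v) e.key = rng();

    // Replace std::sort with the quicksort variant under test
    // (single pivot, dual pivot, ...).
    std::sort(v.begin(), v.end());
}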

Xarn
  • Probably better suited for [Programmers](http://programmers.stackexchange.com/) or [CS](http://cs.stackexchange.com/). – Jonathon Reinhart Aug 14 '14 at 17:41
  • @JonathonReinhart Ok, I'll try leaving it here for a bit and move it over in the evening. (I don't see any way to actually move the question, rather than deleting it and opening a new one, so I want to give it a chance here.) – Xarn Aug 14 '14 at 17:49
  • Is it also expensive to compare elements (for example, long strings where many share a significant number of leading characters)? – rcgldr Aug 14 '14 at 18:08
  • The performance of dual-pivot quicksort depends on how it is implemented. I have a Java version for reference: https://github.com/BruceZu/sawdust/commit/6e7d1d40c056c33624b987ec38f9c47e741d16ce . It still has room for improvement. – Bruce Zu May 23 '16 at 21:17

3 Answers


I have actually studied this extensively in my paper: https://arxiv.org/ftp/arxiv/papers/1505/1505.00558.pdf

The short answer is: no. Dual-pivot does not perform as well as a high-end version of quicksort when swapping large elements. Look at figures 22 and 23.

S0lo
  • Interesting reading, but I want to note that quicksort has traditionally been in-place (this is, after all, the reason why introsort switches to heapsort and not mergesort). – Xarn Sep 07 '16 at 19:35
  • Yes, in-place performs better most of the time. But notice also that even in-place sorting uses "swaps", and each swap takes 3 copy operations. I use a method that needs 2 copy operations on average. – S0lo Sep 09 '16 at 15:44

I think there are 3 reasons why QUICKSORTYAROSLAVSKIY() from the paper is not fast.

  1. It is difficult to get good pivots. Often the 2 pivots end up too close together or too far apart.
  2. There are too many swaps. For example, the swapping in line 12 amounts to a partial array slide.
  3. The insertion sort as implemented is not practical. The Knuth algorithm in the paper might have been written for teaching purposes. I don't like a partial array slide.

A merit of QUICKSORTYAROSLAVSKIY() is that the 2 pivots are excluded from the partitioning.
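For reference, here is a rough sketch of Yaroslavskiy-style dual-pivot partitioning as it is commonly presented (my own paraphrase on ints, not the paper's exact pseudocode); it shows where the extra swaps come from:

#include <algorithm>
#include <cstdio>
#include <vector>

// Rough sketch of dual-pivot quicksort with Yaroslavskiy-style partitioning.
// p = left pivot, q = right pivot, p <= q.
void dualPivotQuicksort(std::vector<int>& a, int left, int right) {
    if (left >= right) return;
    if (a[left] > a[right]) std::swap(a[left], a[right]);
    const int p = a[left], q = a[right];

    int l = left + 1;    // a[left+1 .. l-1]  <  p
    int g = right - 1;   // a[g+1 .. right-1] >  q
    int k = l;           // a[l .. k-1] lies between p and q
    while (k <= g) {
        if (a[k] < p) {
            std::swap(a[k], a[l]);          // extra swap: small element goes left
            ++l;
        } else if (a[k] > q) {
            while (a[g] > q && k < g) --g;
            std::swap(a[k], a[g]);          // extra swap: large element goes right
            --g;
            if (a[k] < p) {                 // element pulled in may belong left
                std::swap(a[k], a[l]);
                ++l;
            }
        }
        ++k;
    }
    --l; ++g;
    std::swap(a[left], a[l]);    // place pivot p
    std::swap(a[right], a[g]);   // place pivot q

    dualPivotQuicksort(a, left, l - 1);
    dualPivotQuicksort(a, l + 1, g - 1);
    dualPivotQuicksort(a, g + 1, right);
}

int main() {
    std::vector<int> v{9, 3, 7, 1, 8, 2, 6, 5, 4, 0};
    dualPivotQuicksort(v, 0, (int)v.size() - 1);
    for (int x : v) std::printf("%d ", x);
    std::printf("\n");
}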

I devised an algorithm to make quicksort faster, because swapping is a redundant operation and a pivot should be chosen from more elements when N is big. More..


If your goal is to reduce the number of swaps, you should fall back to sorting pointers. Something like this:

#include <algorithm>
#include <vector>
using namespace std;

void sort(vector<BigClass> &data) {
    //First get an array of pointers.
    vector<BigClass*> temp;
    temp.reserve(data.size());
    for(BigClass& current : data) temp.push_back(&current);

    //Sort the pointers, comparing the objects they point to.
    std::sort(temp.begin(), temp.end(), [](const BigClass* a, const BigClass* b){
        /* compare *a and *b, i.e. the objects the pointers point to */
    });

    //Copy the objects out in sorted order.
    vector<BigClass> result;
    result.reserve(data.size());
    for(BigClass* current : temp) result.push_back(*current);

    //Update the argument.
    std::swap(result, data);
}

This is guaranteed to do precisely data.size() copy constructions. You can't do better.
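For illustration only, here is a hypothetical BigClass and the kind of comparator that could fill the placeholder lambda above; the key member, sizes, and values are made up for the example:

#include <algorithm>
#include <vector>
using namespace std;

// Made-up element type: a small sort key plus a large payload that makes
// copying and swapping whole objects expensive.
struct BigClass {
    int key;
    char payload[4096];
};

int main() {
    vector<BigClass> data(100);
    for (size_t i = 0; i < data.size(); ++i) data[i].key = int(data.size() - i);

    // The same pointer-sorting pattern, with the comparator spelled out:
    vector<BigClass*> temp;
    temp.reserve(data.size());
    for (BigClass& current : data) temp.push_back(&current);
    std::sort(temp.begin(), temp.end(),
              [](const BigClass* a, const BigClass* b) { return a->key < b->key; });

    vector<BigClass> result;
    result.reserve(data.size());
    for (BigClass* current : temp) result.push_back(*current);
    swap(result, data);  // data is now sorted by key
}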

cmaster - reinstate monica
  • Actually it is trivial to improve this, as you can skip over elements that are already in their sorted position. Also, this doesn't answer the original question. – Xarn Nov 24 '14 at 10:59
  • No, it's not that trivial: When the function returns, its internal storage is a different one. It is the one that was allocated for the result array, into which *all* the elements have to be copied. You can't just skip a few elements, because they would be *missing* in the sorted array. – cmaster - reinstate monica Nov 25 '14 at 19:48