It seems quite straightforward to implement quicksort using bidirectional iterators with O(N lg N) time and O(lg N) space. So, what is the particular reason that std::sort()
requires random-access iterators?
I have read the question why do std::sort and partial_sort require random-access iterators?, but it doesn't explain which specific part of a possible std::sort()
implementation actually requires random-access iterators to maintain its time and space complexity.
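For reference, the places where typical implementations lean on random access are pivot selection and introsort's recursion-depth limit, both of which rely on the O(1) distance `last - first`. A minimal sketch of the pivot-selection part (illustrative only, not taken from any particular standard library):

```cpp
#include <vector>

// Illustrative: middle-element pivot selection via O(1) iterator
// arithmetic. With bidirectional iterators, reaching the midpoint
// would instead cost O(N) via std::next.
template <typename RandomIt>
RandomIt mid_iter(RandomIt first, RandomIt last) {
    return first + (last - first) / 2;  // random access only
}
```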
A possible implementation with O(N lg N) time and O(lg N) space:
#include <algorithm>   // std::iter_swap
#include <functional>  // std::less
#include <iterator>    // std::iterator_traits, std::prev, std::next, std::distance
#include <stack>
#include <tuple>       // std::tie
#include <utility>     // std::pair

// Hoare-style partition over a bidirectional range: moves elements
// satisfying pred to the front and returns the partition point.
template <typename BidirIt, typename Pred>
BidirIt partition(BidirIt first, BidirIt last, Pred pred) {
    while (true) {
        // Scan forward for the first element that fails pred.
        while (true) {
            if (first == last) return first;
            if (!pred(*first)) break;
            ++first;
        }
        // Scan backward for the last element that satisfies pred.
        while (true) {
            if (first == --last) return first;
            if (pred(*last)) break;
        }
        std::iter_swap(first, last);
        ++first;
    }
}
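Note that partitioning itself is not what forces random access: since C++11, std::partition requires only forward iterators. A minimal check (my own illustration, not part of the implementation above):

```cpp
#include <algorithm>
#include <forward_list>

// std::partition needs only forward iterators (since C++11), so the
// partition step of quicksort cannot be what makes std::sort demand
// random access.
bool partition_works_on_forward_list() {
    std::forward_list<int> fl{3, 1, 4, 1, 5};
    auto pred = [](int x) { return x < 3; };
    auto it = std::partition(fl.begin(), fl.end(), pred);
    return std::all_of(fl.begin(), it, pred) &&
           std::none_of(it, fl.end(), pred);
}
```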
template <typename BidirIt, typename Less = std::less<void>>
void sort(BidirIt first, BidirIt last, Less&& less = Less{}) {
    using value_type = typename std::iterator_traits<BidirIt>::value_type;
    using pair = std::pair<BidirIt, BidirIt>;
    std::stack<pair> stk;

    stk.emplace(first, last);
    while (!stk.empty()) {
        std::tie(first, last) = stk.top();
        stk.pop();
        if (first == last) continue;

        // Use the last element as the pivot.
        auto prev_last = std::prev(last);
        auto pivot = *prev_last;
        auto mid = ::partition(first, prev_last,
                               [&](const value_type& val) {
                                   return less(val, pivot);
                               });
        std::iter_swap(mid, prev_last);

        // Push the larger half first so the smaller half is popped and
        // processed first; this bounds the stack depth at O(lg N). The
        // std::distance calls are O(N) on bidirectional iterators, but
        // that adds only O(N) work per level, preserving O(N lg N) time.
        if (std::distance(first, mid) > std::distance(mid, last)) {
            stk.emplace(first, mid);
            stk.emplace(std::next(mid), last);
        } else {
            stk.emplace(std::next(mid), last);
            stk.emplace(first, mid);
        }
    }
}
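One subtlety about the O(lg N) space claim: an explicit stack that pushes both halves unconditionally can grow to O(N) entries in the worst case; the guarantee only holds if the smaller half is always handled first. A self-contained simulation over range sizes only (a hypothetical helper, not part of the code above) illustrates the difference under a worst-case split where the pivot always lands at the low end:

```cpp
#include <algorithm>
#include <cstddef>
#include <stack>

// Simulates the explicit-stack quicksort on range *sizes* only, with
// the pivot always landing at the low end (e.g. already-sorted input
// with a minimum-element pivot). Returns the maximum stack depth
// reached. larger_first selects whether the larger half is pushed
// first, so the smaller half is popped and processed first.
std::size_t max_stack_depth(std::size_t n, bool larger_first) {
    std::stack<std::size_t> stk;
    std::size_t depth = 0;
    stk.push(n);
    while (!stk.empty()) {
        depth = std::max(depth, stk.size());
        std::size_t sz = stk.top();
        stk.pop();
        if (sz <= 1) continue;
        std::size_t left = 0, right = sz - 1;  // worst-case split
        if (larger_first) {
            stk.push(std::max(left, right));
            stk.push(std::min(left, right));  // smaller half on top
        } else {
            stk.push(left);
            stk.push(right);  // larger half on top: stack accumulates
        }
    }
    return depth;
}
```

With the naive push order the depth grows linearly in N, while the larger-first order keeps it small; this is why production quicksorts recurse (or push) the smaller side first.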