I have many sets of doubles: m sets, where m is in the hundreds of thousands, each containing roughly 5-10 values (call that n, a small constant). The doubles are essentially randomly distributed. I need the median of each set, and because m is so large, each median has to be computed quickly. The sets themselves are tiny, though, so I think that will play a significant role in choosing how to do it. I know std::nth_element gets the median in O(n) via the selection algorithm, and I'm not going to beat that complexity; but because n is a small constant, I'm really looking for the method with the smallest constant overhead (a baseline sketch of what I mean is below).
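For reference, here is a minimal sketch of the nth_element baseline, assuming the median of an even-sized set is the mean of the two middle values; the helper name medianOfSmallSet is just for illustration:

```cpp
#include <algorithm>
#include <vector>

// Median of one small, non-empty set via std::nth_element.
// Takes the vector by value because nth_element reorders its input.
double medianOfSmallSet(std::vector<double> v) {
    const std::size_t mid = v.size() / 2;
    std::nth_element(v.begin(), v.begin() + mid, v.end());
    double m = v[mid];
    if (v.size() % 2 == 0) {
        // Even n: average the two middle values. After partitioning,
        // the lower middle value is the max of the first half.
        m = (m + *std::max_element(v.begin(), v.begin() + mid)) / 2.0;
    }
    return m;
}
```

The copy, the partitioning pass, and the extra scan for even n are exactly the kind of per-call constant overhead in question when n is only 5-10.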
I have found a bunch of different ways to compute the median (below) and am just curious whether anyone knows the "correct" method to use here.
Min-max heaps (O(n) build time, constant-time access to the median, but probably too much overhead for sets this small)
This question from 2010, which may be out of date (newer STL/Boost code may already implement this stuff) and which also focuses more on time complexity than on constant overhead.