I found the inventor of flashsort claiming that it is faster than quicksort for uniformly distributed data, and that most real-world data is close to uniform. In my own projects I have found the same: the data is usually close to uniform. For example, I never encounter data like [1,3,500,10000,100000000,99999090990] on a daily basis. My data looks more like [300,200,500,906,1500], and as n grows the distribution only gets closer to uniform. After reading the article, I also found that if any class contains too many elements, we can call flashsort recursively to subdivide that class further.
Am I missing something? Why don't we use flashsort for integer data first, and if it starts heading toward the worst case, switch to a sort like introsort to avoid O(n^2)?
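To make the idea concrete, here is a rough Python sketch of the hybrid I have in mind. It is a simplified, non-in-place flashsort (the real algorithm permutes in place): elements are classified into roughly 0.43*n buckets assuming a near-uniform distribution; an oversized bucket triggers the recursive subdivision the article mentions, and small buckets fall back to Python's built-in sort, standing in for insertion sort or introsort. The 0.43 ratio and the size threshold are my own illustrative choices, not from the article.

```python
def flashsort(a, ratio=0.43):
    """Simplified bucket-based flashsort sketch (not in-place).

    Assumes roughly uniform data; falls back to the built-in sort
    (a stand-in for insertion sort / introsort) on small buckets.
    """
    n = len(a)
    if n <= 32:
        return sorted(a)          # tiny input: just use the fallback sort
    lo, hi = min(a), max(a)
    if lo == hi:
        return list(a)            # all elements equal: already sorted
    m = max(2, int(ratio * n))    # number of classes, ~0.43 * n
    scale = (m - 1) / (hi - lo)
    buckets = [[] for _ in range(m)]
    for x in a:
        # Classification step: map each value to a class by linear scaling.
        buckets[int(scale * (x - lo))].append(x)
    out = []
    for b in buckets:
        if len(b) > 2 * (n // m) + 16:
            # Class got far more than its fair share: data is not
            # uniform here, so recurse as the article suggests.
            out.extend(flashsort(b))
        else:
            out.extend(sorted(b))  # small class: fallback sort finishes it
    return out
```

With uniform-ish input like [300,200,500,906,1500], almost every element lands in its own class and the fallback sort has nearly nothing left to do; with skewed input, the oversized-bucket branch is what keeps the recursion from degrading to O(n^2).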