I implemented this merge sort based on the pseudo-code in Cormen's book (Introduction to Algorithms). I'm reproducing it here, since it's short:
void merge(vector<double> &array, int start, int mid, int end) {
    int i = start;
    int j = mid + 1;
    int k = start;
    vector<double> b(array.size());
    while (i <= mid && j <= end) {
        if (array[i] <= array[j])
            b[k++] = array[i++];
        else
            b[k++] = array[j++];
    }
    while (i <= mid)
        b[k++] = array[i++];
    while (j <= end)
        b[k++] = array[j++];
    for (k = start; k <= end; k++)
        array[k] = b[k];
}
The merge above should be O(n), and the mergeSort below should be O(n*lg n), where lg is log base 2:
void mergeSort(vector<double> &array, int start, int end) {
    if (start < end) {
        int mid = (end - start) / 2 + start;
        mergeSort(array, start, mid);
        mergeSort(array, mid + 1, end);
        merge(array, start, mid, end);
    }
}
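For context, the O(n*lg n) claim comes from the usual divide-and-conquer recurrence, assuming each call to merge costs time linear in the subrange it merges:

```latex
T(n) = 2\,T(n/2) + \Theta(n) \;\Rightarrow\; T(n) = \Theta(n \lg n)
```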
I ran some experiments with random vector instances of sizes 1000 (10^3), 10000 (10^4), 50000 (5*10^4), 100000 (10^5), 250000 (2.5*10^5), and 500000 (5*10^5), with 30 instances of each size. These are the average results for each instance size:
1000 - ~0.000 s
10000 - 0.344 s
50000 - 20.456 s
100000 - 59.083 s
250000 - 360.814 s
500000 - 1729.245 s
All elapsed times were taken from the Linux time command (user time) when running my merge sort. As you can see, this is not O(n*lg n) behavior. What am I missing? I don't know if it's relevant, but my system configuration is:
OS: Fedora 18 - 64 bits
Processor: Intel® Core™ i3 CPU M 380 @ 2.53GHz × 4
Memory: 2.8 GiB