
This is my C++ code, compiled as C++11; <chrono> is used to measure time in microseconds. My merge sort takes about 24 seconds to sort a randomly generated array of 10 million numbers, but my friends report about 3 seconds for the same task. My code seems correct, and the only difference between mine and theirs is that they used clock() from <ctime> instead of <chrono> to measure the time. Could the choice of timing method explain this deviation? Please answer!

This is my code:

#include <iostream>
#include <climits>
#include <cstdlib>
#include <ctime>
#include <chrono>
using namespace std;

void merge_sort(long* inputArray,long low,long high);
void merge(long* inputArray, long low, long mid, long high); // parameter order matches the definition below

int main(){
    srand(time(NULL));
    int n = 1000;
    long* inputArray = new long[n];
    for (long i = 0; i < n; i++) {   // initialize the array of size n with random numbers
        inputArray[i] = rand();      // generate a random number
    }
    auto Start = std::chrono::high_resolution_clock::now(); // start time for merge sort
    merge_sort(inputArray, 0, n - 1); // high is the index of the LAST element, so pass n-1, not n
    auto End = std::chrono::high_resolution_clock::now();   // end time for merge sort
    cout << endl << endl << "Time taken for Merge Sort = "
         << std::chrono::duration_cast<std::chrono::microseconds>(End - Start).count()
         << " microseconds"; // display the time taken for merge sort
    delete []inputArray;
    return 0;
}

void merge_sort(long* inputArray,long low,long high){
    if (low<high){
        long mid = low + (high - low) / 2; // long, not int, and overflow-safe midpoint for large arrays
        merge_sort(inputArray,low,mid);
        merge_sort(inputArray,mid+1,high);
        merge(inputArray,low,mid,high);
    }
    return;
}

void merge(long* inputArray, long low, long mid, long high){
    long n1 = mid - low + 1;
    long n2 = high - mid;
    long* L = new long[n1 + 1];
    long* R = new long[n2 + 1];
    for (long i = 0; i < n1; i++) {   // copy exactly n1 elements; i<=n1 read past the subarray
        L[i] = inputArray[low + i];
    }
    for (long j = 0; j < n2; j++) {   // copy exactly n2 elements; j<=n2 read inputArray[high+1], out of bounds
        R[j] = inputArray[mid + j + 1];
    }
    L[n1] = LONG_MAX; // sentinels; LONG_MAX matches the long element type (INT_MAX would be wrong if values exceed int range)
    R[n2] = LONG_MAX;
    long i=0;
    long j=0;
    for (long k=low;k<=high;k++){
        if (L[i] <= R[j] ){
            inputArray[k]=L[i];
            i=i+1;
        }
        else{
            inputArray[k]=R[j];
            j=j+1;
        }
    }

    delete[] L;
    delete[] R;
}
  • Why don't you use `std::sort` ? And on which operating system are you running, with which compiler and which optimization? See also [this thread](http://stackoverflow.com/q/27178789/841108) about non-conforming `clock` implementation on Windows... – Basile Starynkevitch Dec 01 '14 at 13:19
  • 1
    I would assume the point is to learn sorting algorithms. But are you machines and OS exactly the same? Try running each others code or try changing the timing method so that you have some sort of parity. – crashmstr Dec 01 '14 at 13:22
  • I am using an Intel i7 processor with 4 GB RAM, and the OS is Windows 8 64-bit. –  Dec 02 '14 at 14:39

1 Answer


There is no way the two clock reads themselves account for 20 seconds; reading a clock costs microseconds at most, so the choice between <chrono> and clock() cannot explain the difference.

As others have pointed out, the results depend heavily on the platform and on compiler optimization (a debug build can be dramatically slower than a release build).

If you use the same settings as your friends and still see the performance gap, use a profiler to find where your code spends its time. On Linux a profiler such as gprof works well; on Windows, Visual Studio's built-in profiler is a good candidate.

geoalgo
  • I am using Windows, and I use Code::Blocks as my IDE. The thing is, Visual Studio 2010 doesn't support the C++11 standard, so chrono cannot be used there. –  Dec 02 '14 at 14:41
  • 1
    I never used codeblock but it seems you can add profiler via pluggins, I saw this profiler that you may use: http://wiki.codeblocks.org/index.php?title=Code_Profiler_plugin. Otherwise, if you are just not using visual because of chrono, you can use boost timers instead to measure time as they are portative. – geoalgo Dec 02 '14 at 16:09