Update: I see what my problem is. I am sorting all 10000 datapoints on every iteration, so each pass takes roughly the same time. The loop variable t was my (incorrect) attempt to sort the first 100 datapoints, then add the next 100 and sort again, and so on.
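For reference, here is a minimal sketch of the measurement I was actually going for: sort a fresh copy of the first t datapoints on each pass, so the timing actually depends on t. It assumes the same quicksort(int[] a, int lo, int hi) signature as in the code below, plus java.util.Arrays for copyOfRange.

int[] dpArr = new int[100000];
for (int i = 0; i < dpArr.length; i++)
{
    dpArr[i] = (int) (Math.random() * 100000);
}
for (int t = 100; t <= dpArr.length; t += 100)
{
    // Copy the first t datapoints so quicksort always sees unsorted input.
    int[] copy = Arrays.copyOfRange(dpArr, 0, t);
    long start = System.nanoTime();
    quicksort(copy, 0, t - 1);
    double ms = (System.nanoTime() - start) / 1000000.0; // ns to ms
    System.out.println(t + "," + ms);
}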
I am trying to show that the average/best-case running time of Quicksort is O(n log n). However, when I time the method over a large range of input sizes, the run time comes out linear.
I assume it has something to do with the way I wrote the for loop, but I have no idea why it would be wrong.
I also have another function that runs Quicksort on an already-sorted array to measure the worst-case running time, but that comes out linear as well (see the sketch at the end).
For both functions, the timing code looks like this:
// Requires java.util.Scanner, java.io.PrintWriter, java.io.FileWriter, java.io.IOException.
System.out.println("Enter the number of data points you want to sort as an integer: ");
Scanner num = new Scanner(System.in);
int datapoints = num.nextInt();
// The array is filled with random values once, up front.
int[] dpArr = new int[datapoints];
for (int i = 0; i < datapoints; i++)
{
    dpArr[i] = (int) (Math.random() * 100000);
}
int n = dpArr.length;
try
{
    PrintWriter pw = new PrintWriter(new FileWriter("random2.csv", true));
    StringBuilder sb = new StringBuilder();
    StringBuilder sb2 = new StringBuilder();
    for (int t = 100; t <= 100000; t += 100)
    {
        // BUG (see update above): t is never used in the loop body, so every
        // iteration sorts the full n-element array, and after the first pass
        // dpArr is already sorted, so each timing is roughly constant.
        long qstotal = 0;
        long start = System.nanoTime();
        quicksort(dpArr, 0, n - 1);
        qstotal += System.nanoTime() - start;
        double qsDuration = (double) qstotal / 1000000; // ns to ms
        sb.setLength(0);
        sb.append(t);
        sb.append(',');
        sb.append(qsDuration);
        sb.append("\n");
        pw.write(sb.toString());
    }
    for (int t = 100; t <= 100000; t += 100)
    {
        // Same problem here: t is unused, and dpArr was already fully sorted
        // by the loop above, so randomQuicksort always sees the same sorted,
        // n-element input.
        long rqstotal = 0;
        long start = System.nanoTime();
        randomQuicksort(dpArr, 0, n - 1);
        rqstotal += System.nanoTime() - start;
        double rqsDuration = (double) rqstotal / 1000000; // ns to ms
        sb2.setLength(0);
        sb2.append(t);
        sb2.append(',');
        sb2.append(rqsDuration);
        sb2.append("\n");
        pw.write(sb2.toString());
    }
    pw.flush();
    pw.close();
}
catch (IOException e) // FileWriter's constructor throws IOException, which FileNotFoundException alone does not cover
{
    System.out.println("File not found.");
}
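For the worst-case test mentioned above, a sketch along these lines should show the quadratic growth, assuming the quicksort uses a fixed pivot (e.g., the first or last element), since that is what makes sorted input the worst case. Note that the one-sided recursion gets very deep on sorted input, so t is kept small here to avoid a StackOverflowError.

for (int t = 100; t <= 10000; t += 100)
{
    int[] sorted = new int[t];
    for (int i = 0; i < t; i++)
    {
        sorted[i] = i; // already in ascending order
    }
    long start = System.nanoTime();
    quicksort(sorted, 0, t - 1);
    double ms = (System.nanoTime() - start) / 1000000.0; // ns to ms
    System.out.println(t + "," + ms);
}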