I implemented the selection sort algorithm in C++ twice: once with plain arrays and once with std::vector. I tested both with random samples in the range [1, 65535]. Here are the results:
+-------------+-------------------+--------------------+
| Sample Size | Time for impArray | Time for impVector |
+-------------+-------------------+--------------------+
| 50 | 9.263 ns | 32.329 ns |
+-------------+-------------------+--------------------+
| 500 | 471.537 us | 3039.441 us |
+-------------+-------------------+--------------------+
| 5000 | 117.911 ms | 1202.402 ms |
+-------------+-------------------+--------------------+
If this were a time-critical application with large sample sizes, the choice of implementation would matter enormously. What is the reason for the large execution-time difference between the two implementations?
- Compiled and measured with: https://www.onlinegdb.com/online_c++_compiler
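For illustration, samples in that range could be generated along these lines (a minimal sketch; the fillRandom helper is hypothetical and not part of the benchmarked code, which reads its arrays from the list linked further below):

#include <random>

// Hypothetical helper: fill a buffer with uniform random values in [1, 65535].
void fillRandom(int* pBuf, int i32Size)
{
    std::mt19937 gen(std::random_device{}());          // seed from the OS entropy source
    std::uniform_int_distribution<int> dist(1, 65535); // closed range [1, 65535]
    for(int i = 0; i < i32Size; i++)
        pBuf[i] = dist(gen);
}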
Sorting.h
#ifndef SORTING_ALGORITHMS_H
#define SORTING_ALGORITHMS_H

#include <utility>
#include <vector>
#include <algorithm>

namespace etpc
{
    // Selection sort on a raw array: on each pass, find the smallest
    // remaining element and swap it into position i.
    template <class T>
    void sortSelection(T* pArrHead, int i32ArrSize)
    {
        int i32SmallestIndex;
        for(int i = 0; i < i32ArrSize - 1; i++)
        {
            i32SmallestIndex = i;
            for(int j = i + 1; j < i32ArrSize; j++)
            {
                if(pArrHead[j] < pArrHead[i32SmallestIndex])
                    i32SmallestIndex = j;
            }
            if(i32SmallestIndex != i)
            {
                std::swap(pArrHead[i], pArrHead[i32SmallestIndex]);
            }
        }
    }

    // Selection sort on a std::vector, with std::min_element doing the
    // inner scan. Based on the following answer:
    // https://codereview.stackexchange.com/a/177919/206877
    // typename vs class:
    // https://stackoverflow.com/questions/2023977/difference-of-keywords-typename-and-class-in-templates
    template <class T>
    void sortSelection(std::vector<T>& vArrHead)
    {
        typename std::vector<T>::iterator it;
        for(it = vArrHead.begin(); it != vArrHead.end(); ++it)
        {
            std::iter_swap(it, std::min_element(it, vArrHead.end()));
        }
    }
}

#endif // SORTING_ALGORITHMS_H
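As an aside, the two overloads could be collapsed into a single iterator-pair version that accepts both raw arrays and vectors; a sketch of that design (my own, not the code that was benchmarked; it relies on the <algorithm> include already in Sorting.h):

// Sketch: one selection sort over any forward-iterator range.
// Pointers qualify as iterators, so sortSelection(arr, arr + i32Size)
// and sortSelection(v.begin(), v.end()) would both work.
template <class It>
void sortSelection(It first, It last)
{
    for(; first != last; ++first)
        std::iter_swap(first, std::min_element(first, last));
}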
Main.cpp
#include "Sorting.h"
#include <iostream>
#include <chrono>
template <class T>
void printArr(T* pArrBegin, int i32ArrSize)
{
for(int i=0; i<i32ArrSize; i++)
{
std::cout << pArrBegin[i] << " ";
}
std::cout << '\n';
}
template <class T>
void printVect(std::vector<T>& vArrHead)
{
for (auto elem : vArrHead)
std::cout << elem << " ";
std::cout << '\n';
}
int main()
{
std::chrono::steady_clock::time_point begin;
std::chrono::steady_clock::time_point end;
static const int i32Size = 1;
int arr[i32Size] = {0};
begin = std::chrono::steady_clock::now();
etpc::sortSelection<int>(arr, i32Size);
end = std::chrono::steady_clock::now();
std::cout << "Time difference impArray = " << std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin).count() << "[ns]" << std::endl;
//printArr<int>(arr, i32Size);
std::vector<int> v = {1};
begin = std::chrono::steady_clock::now();
etpc::sortSelection<int>(v);
end = std::chrono::steady_clock::now();
std::cout << "Time difference impVector = " << std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin).count() << "[ns]" << std::endl;
//printVect(v);
return 0;
}
Note that I rebuilt the program for each run with one of the following initializers:
static const int i32Size = 50;
static const int i32Size = 500;
static const int i32Size = 5000;
The test arrays themselves can be found here: https://repl.it/repls/BountifulAggravatingPetabyte
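Since a single steady_clock reading around a 50-element sort is close to the clock's granularity, averaging over repeated runs would give steadier numbers; a minimal sketch (the repetition count of 1000 is arbitrary, and vCopy is a name of my own):

// Sketch: average many repetitions to smooth out timer-granularity noise.
const int i32Reps = 1000;
long long i64Total = 0;
for(int r = 0; r < i32Reps; r++)
{
    std::vector<int> vCopy = v; // sort a fresh, unsorted copy each time
    auto t0 = std::chrono::steady_clock::now();
    etpc::sortSelection<int>(vCopy);
    auto t1 = std::chrono::steady_clock::now();
    i64Total += std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
}
std::cout << "mean impVector = " << i64Total / i32Reps << "[ns]\n";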
I then compiled the code with
g++ -std=c++17 -O3 -Wall -Wextra -Werror main.cpp -o main
and ran it on a machine with the following specs:
CPU: Single core Intel Xeon (-MCP-) cache: 46080 KB speed: 2300 MHz (max)
System: Host: erdem-1 Kernel: 5.0.0-1034-gcp x86_64 bits: 64 Console: tty 0
Distro: Ubuntu 18.04.4 LTS
With -O3 the two implementations seem about equally efficient in execution time, if not in code clutter. The updated results are as follows:
+-------------+-------------------+--------------------+
| Sample Size | Time for impArray | Time for impVector |
+-------------+-------------------+--------------------+
| 50 | 3.154 ns | 3.499 ns |
+-------------+-------------------+--------------------+
| 500 | 136.928 us | 240.906 us |
+-------------+-------------------+--------------------+
| 5000 | 14.550 ms | 14.496 ms |
+-------------+-------------------+--------------------+