I'm using VC++ 2013 on Windows 7 64-bit, with an Intel i7 at 3.6 GHz.
I want to measure the execution time of very fast math operations. For example, I'd like to compare the performance of the standard fabsf()
function with alternative "faster" methods, or the standard tanh()
with a Padé approximation, and so on.
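To give a concrete idea of what I mean by an alternative, here is roughly the kind of Padé-style tanh() replacement I want to time; the coefficients come from the [3/2] Padé approximant of tanh around 0, and the exact form I end up using may well be of higher order:

#include <cmath>
#include <cstdio>

// [3/2] Pade approximant of tanh around 0, from its continued fraction:
// tanh(x) ~= x * (15 + x^2) / (15 + 6 * x^2).
// Only an example of the kind of "fast" alternative I want to benchmark.
inline float tanh_pade(float x)
{
    const float x2 = x * x;
    return x * (15.0f + x2) / (15.0f + 6.0f * x2);
}

int main()
{
    // Quick sanity check against the standard library version.
    for (float x = -1.0f; x <= 1.0f; x += 0.25f)
        printf("x = % .2f   std::tanh = % .6f   pade = % .6f\n",
               x, std::tanh(x), tanh_pade(x));
    return 0;
}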
The problem is that these operations are so fast that even though I run them many millions of times, I always get 0 milliseconds between the start and the end of the benchmark.
I tried to get the time in nanoseconds using <chrono>,
but the reported time appears to be rounded to a tenth of a millisecond rather than having real nanosecond resolution, so I still get 0 elapsed nanoseconds in my benchmark.
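In case it helps, this is the small check I used to see what resolution high_resolution_clock actually advertises on my machine (I realize the nominal tick period may not match the real granularity of the underlying OS timer, but it at least shows how the clock is defined):

#include <chrono>
#include <cstdio>

int main()
{
    // Print the nominal tick period of high_resolution_clock as the ratio
    // period::num / period::den (in seconds), plus whether it is steady.
    typedef std::chrono::high_resolution_clock hrc;
    printf("tick period: %lld / %lld seconds, steady: %d\n",
           static_cast<long long>(hrc::period::num),
           static_cast<long long>(hrc::period::den),
           static_cast<int>(hrc::is_steady));
    return 0;
}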
Can you please provide a snippet of code that I can use to run my benchmarks?
Here is what I have so far:
#include <vector>
#include <chrono>
#include <ctime>
#include <cstdlib>   // rand, srand
#include <cstdio>    // printf
#include <tchar.h>   // _tmain, _TCHAR

using namespace std;

// approximately 1/RAND_MAX (RAND_MAX is 32767 in VC++)
#define RAND_MAX_RECIP 0.00003051757f

int _tmain(int argc, _TCHAR* argv[])
{
    srand(static_cast<unsigned>(time(0)));

    // Fill a buffer with random float numbers in [0, 1]
    vector<float> buffer;
    for (unsigned long i = 0; i < 10000000; ++i)
        buffer.push_back((float)rand() * RAND_MAX_RECIP);

    // Get start time
    auto start = std::chrono::high_resolution_clock::now();

    for (unsigned long i = 0; i < buffer.size(); ++i)
    {
        // do something with the float numbers in the buffer
    }

    // Get elapsed time
    auto finish = std::chrono::high_resolution_clock::now();

    // count() returns a long long here, so use %lld rather than %d
    printf("Executed in %lld ns\n\n",
           std::chrono::duration_cast<std::chrono::nanoseconds>(finish - start).count());

    return 0;
}
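And for completeness, this is the kind of work I intend to put between the two now() calls; I accumulate the results and print the sum afterwards so the compiler can't discard the loop as dead code (fast_fabsf() is just a placeholder name for whatever alternative implementation I end up comparing against fabsf()):

#include <cstddef>
#include <vector>

// Placeholder for the "faster" fabsf() alternative I want to benchmark.
inline float fast_fabsf(float x)
{
    return x < 0.0f ? -x : x;
}

// The work that would replace the empty loop above: accumulate the results
// and return the sum so the caller can print it and the optimizer has to
// keep the computation.
float run_fabs_benchmark(const std::vector<float>& buffer)
{
    float sum = 0.0f;
    for (std::size_t i = 0; i < buffer.size(); ++i)
        sum += fast_fabsf(buffer[i]);
    return sum;
}

// Usage inside _tmain, between the two now() calls:
//     float checksum = run_fabs_benchmark(buffer);
//     ...
//     printf("checksum: %f\n", checksum);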