When measuring time on the JVM with System.nanoTime(), you get a much higher resolution than with std::chrono::high_resolution_clock. How can that be, and is there a cross-platform way to get the same resolution in C++ as on the JVM?
Examples:
Kotlin (JVM):
fun main(args: Array<String>) {
    for (i in 0..10)
        test() // warmup
    println("Average resolution: ${test()}ns")
}

fun test(): Double {
    // Record only timestamps that differ from the previous one.
    val timeList = mutableListOf<Long>()
    for (i in 0 until 10_000_000) {
        val time = System.nanoTime()
        if (timeList.isEmpty() || time != timeList.last())
            timeList.add(time)
    }
    // Average the deltas between consecutive distinct timestamps.
    return timeList
        .mapIndexed { i, l -> if (i > 0) l - timeList[i - 1] else null }
        .filterNotNull()
        .average()
}
Output: Average resolution: 433.37ns
C++:
#include <iostream>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    using namespace std;
    using namespace chrono;

    // Record only timestamps that differ from the previous one.
    vector<long long int> time_list;
    for (int i = 0; i < 10'000'000; ++i) {
        auto time = duration_cast<nanoseconds>(high_resolution_clock::now().time_since_epoch()).count();
        if (time_list.empty() || time != time_list.back())
            time_list.push_back(time);
    }

    // adjacent_difference leaves the first element untouched, so skip it
    // when averaging the deltas.
    adjacent_difference(time_list.begin(), time_list.end(), time_list.begin());
    auto result = accumulate(time_list.begin() + 1, time_list.end(), 0.0) / (time_list.size() - 1);
    printf("Average resolution: %.2fns\n", result);
    return 0;
}
Output: Average resolution: 15625657.89ns (MinGW g++)
Edit: Output: Average resolution: 444.88ns (MSVC)
This was done on Windows, but on Linux I get similar results.
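Side note: the ~15.6ms granularity from MinGW looks suspiciously like the default Windows timer interrupt period (1/64s = 15.625ms), which suggests that high_resolution_clock is being mapped to a low-resolution system clock there. The standard only requires high_resolution_clock to be the shortest-tick clock the implementation provides, so what it actually aliases varies by standard library. A minimal sketch to print what a given toolchain advertises (the nominal tick period, which need not match the granularity actually observed):

#include <chrono>
#include <cstdio>

int main() {
    using hrc = std::chrono::high_resolution_clock;
    // Nominal tick period of the clock, in nanoseconds.
    std::printf("advertised period: %.2fns\n",
                1e9 * hrc::period::num / hrc::period::den);
    std::printf("is_steady: %d\n", static_cast<int>(hrc::is_steady));
    return 0;
}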
Edit: To clarify, the original C++ result was measured with MinGW g++; after switching to MSVC I got results on par with the JVM (444.88ns).
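For completeness, one possible workaround (a sketch, not a guaranteed portable solution) is to bypass std::chrono and query the OS high-resolution timer directly, which is roughly what System.nanoTime() does under the hood: QueryPerformanceCounter on Windows and clock_gettime(CLOCK_MONOTONIC) on Linux. The nano_time function below is a hypothetical helper, only covering those two platforms:

#include <cstdint>
#include <cstdio>

#ifdef _WIN32
#include <windows.h>
// Nanoseconds from the performance counter; the division is split into
// whole seconds plus remainder to avoid 64-bit overflow.
static int64_t nano_time() {
    LARGE_INTEGER freq, count;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&count);
    const int64_t sec = count.QuadPart / freq.QuadPart;
    const int64_t rem = count.QuadPart % freq.QuadPart;
    return sec * 1'000'000'000 + rem * 1'000'000'000 / freq.QuadPart;
}
#else
#include <time.h>
// Nanoseconds from the POSIX monotonic clock.
static int64_t nano_time() {
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return static_cast<int64_t>(ts.tv_sec) * 1'000'000'000 + ts.tv_nsec;
}
#endif

int main() {
    // Two consecutive samples to eyeball the granularity.
    const int64_t a = nano_time();
    const int64_t b = nano_time();
    std::printf("delta: %lld ns\n", static_cast<long long>(b - a));
    return 0;
}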