I ran into a performance degradation in one of my applications that I pinpointed to the generation of random data. I wrote a simple benchmark that essentially reproduces it:
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>
std::mt19937 random_engine{std::random_device()()};
// Generate one million random numbers
template <typename T, typename Distribution>
std::vector<T> generate_random(Distribution distribution) {
  std::vector<T> data(1000000);
  std::generate_n(data.begin(), 1000000, [&]() {
    return static_cast<T>(distribution(random_engine));
  });
  return data;
}
template <typename T>
std::vector<T> create_data() {
  if constexpr (std::is_same_v<T, float>)
    return generate_random<float>(
        std::uniform_real_distribution<float>(-127.0f, 127.0f));
  if constexpr (std::is_same_v<T, int8_t>)
    return generate_random<int8_t>(
        std::uniform_int_distribution<int32_t>(-127, 127));
}
int main() {
  auto start = std::chrono::system_clock::now();
  auto float_data = create_data<float>();
  std::cout << "Time (float): "
            << (std::chrono::system_clock::now() - start).count() << '\n';
  start = std::chrono::system_clock::now();
  auto int8_data = create_data<int8_t>();
  std::cout << "Time (int8): "
            << (std::chrono::system_clock::now() - start).count() << '\n';
  return 0;
}
On my machine this outputs:
〉g++ -v
...
Apple clang version 11.0.3 (clang-1103.0.32.29)
Target: x86_64-apple-darwin19.5.0
...
〉g++ tmp.cpp -std=c++17 -O3 && ./a.out
Time (float): 68033
Time (int8): 172771
Why does sampling from the real distribution take less time than from the int distribution?
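To narrow down where the time goes, one thing that could be measured is how often each distribution actually pulls bits from the underlying engine per sample. Below is a minimal sketch of that idea (CountingEngine is a hypothetical wrapper written just for this post, not part of the benchmark above):

#include <cstdint>
#include <iostream>
#include <random>

// Wrapper that satisfies the UniformRandomBitGenerator requirements and
// counts how many times the distributions invoke the underlying mt19937.
struct CountingEngine {
  using result_type = std::mt19937::result_type;
  static constexpr result_type min() { return std::mt19937::min(); }
  static constexpr result_type max() { return std::mt19937::max(); }
  result_type operator()() { ++calls; return engine(); }

  std::mt19937 engine{std::random_device()()};
  std::uint64_t calls = 0;
};

int main() {
  CountingEngine eng;
  std::uniform_real_distribution<float> real_dist(-127.0f, 127.0f);
  std::uniform_int_distribution<int32_t> int_dist(-127, 127);

  for (int i = 0; i < 1000000; ++i) real_dist(eng);
  std::cout << "engine calls (float): " << eng.calls << '\n';

  eng.calls = 0;
  for (int i = 0; i < 1000000; ++i) int_dist(eng);
  std::cout << "engine calls (int): " << eng.calls << '\n';
}

If the int distribution turns out to call the engine more often per sample (or to do extra work such as rejection sampling around each call), that would at least partly explain the gap, but I haven't confirmed this.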
UPDATE
libc++ and libstdc++ show completely opposite behaviour. I'm still looking into where the difference in implementation lies. See libc++ vs. libstdc++
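For completeness, here is a quick way to confirm which standard library a given build actually picks up (just a sketch; it relies on the implementation macros _LIBCPP_VERSION and __GLIBCXX__, which libc++ and libstdc++ respectively define in their standard headers):

#include <iostream>  // any standard library header defines the macros below

int main() {
#if defined(_LIBCPP_VERSION)
  std::cout << "libc++ " << _LIBCPP_VERSION << '\n';
#elif defined(__GLIBCXX__)
  std::cout << "libstdc++ " << __GLIBCXX__ << '\n';
#else
  std::cout << "unknown standard library\n";
#endif
}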