I understand that a float has only so many bits of precision, so it comes as no surprise that the following code thinks `(float)(UINT64_MAX)` and `(float)(UINT64_MAX - 1)` are equal. I am trying to write a function that detects this type of, for lack of a better term, "conversion overflow". I thought I could somehow use `FLT_MAX`, but that's not correct. What's the right way to do this?
```cpp
#include <iostream>
#include <cstdint>

int main()
{
    uint64_t x1(UINT64_MAX);
    uint64_t x2(UINT64_MAX - 1);

    // Both conversions round to the same float (2^64), since a float's
    // 24-bit significand cannot distinguish integers this close to 2^64.
    float f1(static_cast<float>(x1));
    float f2(static_cast<float>(x2));

    std::cout << f1 << " == " << f2 << " = " << (f1 == f2) << std::endl;
    return 0;
}
```
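
For reference, the closest I have come is a round-trip check: convert to float, convert back, and see whether the original value survived. The sketch below (where `convertsExactly` is just a made-up name) also guards against the back-conversion itself being undefined, since `(float)(UINT64_MAX)` rounds up to 2^64, which is outside the range of `uint64_t`. Is something like this the right idea?

```cpp
#include <cstdint>

// Minimal sketch (convertsExactly is a hypothetical name): a uint64_t
// converts to float without loss exactly when the float round-trips
// back to the original value.
bool convertsExactly(uint64_t v)
{
    float f = static_cast<float>(v);

    // Values near UINT64_MAX can round up to 2^64, which is outside
    // uint64_t's range, so converting such an f back would be undefined
    // behavior. Treat anything that rounded to 2^64 as an overflow.
    if (f >= 18446744073709551616.0f) // 2^64, exactly representable as a float
        return false;

    return static_cast<uint64_t>(f) == v;
}
```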