The code below produces a different result on gcc when optimizations are turned on. Without optimizations gcc produces the same result as clang.
#include <cfloat>
#include <cstdint>
double big_dbl() { return DBL_MAX; }
int main() {
    return static_cast<std::uint16_t>(big_dbl());
}
The assembler output at -O0 shows that both gcc and clang call big_dbl and use cvttsd2si to do the floating-point-to-integer conversion, followed by a truncation to a 16-bit unsigned value, and both produce 0. Turning the optimization level up to -O1 optimizes the big_dbl call away in both compilers, but clang still gives 0 as the result of the cast whereas gcc gives 65535. The results can be seen on Compiler Explorer.
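For what it's worth, my reading of the unoptimized result (assuming cvttsd2si returns the x86 "integer indefinite" value, i.e. INT64_MIN, for an out-of-range input, as Intel documents) is that the 0 simply comes from truncating that sentinel to 16 bits. A minimal sketch of that arithmetic:

#include <cstdint>
#include <iostream>

int main() {
    // Assumption: cvttsd2si yields INT64_MIN (0x8000000000000000) when the
    // double cannot be represented in the destination; keeping only the low
    // 16 bits of that value gives 0.
    std::int64_t indefinite = INT64_MIN;
    std::cout << static_cast<std::uint16_t>(indefinite) << '\n';  // prints 0
    return 0;
}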
Is this a bug in the gcc optimizer, or is it undefined behaviour, so the optimizer can do what it likes?
This has bitten us recently and, in some sense, thankfully so, as it exposed a scenario where big_dbl was used incorrectly. We have since fixed the code properly, but I would like to understand the difference in gcc's behaviour.
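For context, the fix we applied is along the lines of the sketch below (the helper name and the choice to return 0 on out-of-range input are illustrative, not our exact code): check that the double is representable before casting, so the conversion is always well defined.

#include <cfloat>
#include <cstdint>
#include <limits>

// Hypothetical helper: only perform the cast when the value is in range.
std::uint16_t to_u16_checked(double d) {
    // The negated comparison is false for NaN, so NaN also takes the
    // error branch rather than hitting an undefined conversion.
    if (!(d >= 0.0 && d <= static_cast<double>(std::numeric_limits<std::uint16_t>::max())))
        return 0;  // or report an error, depending on the caller's needs
    return static_cast<std::uint16_t>(d);
}

int main() {
    return to_u16_checked(DBL_MAX);  // returns 0 with well-defined behaviour
}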