
The code below produces a different result with gcc when optimizations are turned on. Without optimizations, gcc produces the same result as clang.

#include <cfloat>
#include <cstdint>

double big_dbl() { return DBL_MAX; }

int main() {
    return static_cast<std::uint16_t>(big_dbl());
}

The assembler output at -O0 shows that both gcc and clang call big_dbl and use cvttsd2si to do the floating-point-to-integer conversion, followed by a truncation to a 16-bit unsigned value, and both produce 0. Turning the optimization level up to -O1 optimizes the big_dbl call away in both compilers, but clang still gives 0 as the result of the cast, whereas gcc gives 65535. The results can be seen on Compiler Explorer.

Is this a bug in the gcc optimizer, or is it undefined behaviour, so the optimizer can do what it likes?

This has bitten us recently and, in some sense, thankfully so, as it exposed a scenario where big_dbl was used incorrectly. We have since fixed the code properly, but I would still like to understand why gcc behaves differently.

1 Answer


Quoting the current draft standard ([conv.fpint], floating-integral conversions):

A prvalue of a floating-point type can be converted to a prvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type.

DBL_MAX clearly cannot be represented in a std::uint16_t, so the behavior is undefined and implementations can do whatever they like in your case.
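As a practical fix, here is a minimal sketch (not from the original answer) of a saturating conversion that clamps the value before casting, so the cast is always in range and therefore well-defined. The helper name to_u16_saturating is made up for illustration:

#include <cstdint>
#include <limits>

// Hypothetical helper (illustration only): clamp to the uint16_t range
// before converting, so the float-to-int cast is never out of range.
std::uint16_t to_u16_saturating(double d) {
    constexpr std::uint16_t max = std::numeric_limits<std::uint16_t>::max();
    if (!(d >= 0.0))                       // false for NaN and negatives
        return 0;
    if (d >= static_cast<double>(max))     // DBL_MAX lands here
        return max;
    return static_cast<std::uint16_t>(d);  // in range: well-defined truncation
}

With something like this in place, to_u16_saturating(big_dbl()) returns 65535 on both compilers at any optimization level. UndefinedBehaviorSanitizer (-fsanitize=float-cast-overflow, supported by both gcc and clang) should also flag the original cast at run time.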

geza