
While working with Berkeley TestFloat I noticed that testing of floating-point to integer conversion is done by checking (in particular) for specific result values:

// file: test_a_f32_z_i32_rx.c
// function: test_a_f32_z_i32_rx
        ...
        if ( (trueZ != subjZ) || (trueFlags != subjFlags) ) {
            if (
                   verCases_checkInvInts
                || (trueFlags != softfloat_flag_invalid)
                || (subjFlags != softfloat_flag_invalid)
                || ((subjZ != 0x7FFFFFFF) && (subjZ != -0x7FFFFFFF - 1)
                        && (! f32_isNaN( genCases_f32_a ) || (subjZ != 0)))
            ) {
                ++verCases_errorCount;

Here we see the specific result values: 0, 0x7FFFFFFF (the maximum signed 32-bit integer), and -0x7FFFFFFF - 1 (the minimum signed 32-bit integer).

However, per the C11 (and later) standard, converting an out-of-range floating-point value to an integer is undefined behavior (UB). Since (int)f usually compiles down to a single hardware instruction (e.g. cvttss2si on x86_64), I wonder: is there any hardware that converts an out-of-range floating-point value to an integer other than zero, the minimum, or the maximum?
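
For concreteness, here is a minimal sketch (my own, not part of TestFloat) of the conversion in question; since the standard makes it UB, the comments only describe what one common implementation happens to do:

#include <limits.h>
#include <stdio.h>

int main(void) {
    volatile float f = 3.0e10f;   /* far outside the range of int */
    int i = (int)f;               /* typically compiles to cvttss2si on x86_64 */
    /* On x86_64, cvttss2si returns the "integer indefinite" value 0x80000000
       (INT_MIN) for out-of-range or NaN inputs and raises the invalid flag;
       other architectures may saturate or return 0 instead. */
    printf("%d (INT_MIN is %d)\n", i, INT_MIN);
    return 0;
}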


Extra: Why does converting an 'out-of-range integer to integer' lead to implementation-defined behavior (IB), while converting an 'out-of-range floating-point to integer' leads to UB?
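
To illustrate the distinction I mean (the values below are hypothetical examples, not taken from TestFloat):

void example(void) {
    long long big = 1LL << 40;
    int a = (int)big;  /* out of range: implementation-defined result (or an
                          implementation-defined signal is raised), C11 6.3.1.3p3 */
    double d = 1.0e20;
    int b = (int)d;    /* out of range: undefined behavior, C11 6.3.1.4p1 */
    (void)a; (void)b;
}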

pmor

1 Answer


From C11 onward, the C standard treats converting an out-of-range floating-point value to an integer as undefined behavior (UB), so compilers are not required to produce any particular result. Individual hardware may handle such conversions in its own way (saturating, returning a fixed bit pattern, and so on), but that behavior is neither standardized nor uniform across architectures. The Berkeley TestFloat framework compares an implementation against expected results and accepts a few specific values in the invalid cases, but those values do not necessarily reflect what every piece of hardware produces. For portable code, follow the C standard's rules and avoid assumptions about hardware behavior: reject NaN and check the range explicitly before converting (isfinite alone is not sufficient, since a finite value can still be out of range).
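
As a minimal sketch of that explicit handling (the helper name f32_to_i32_checked is made up for illustration; it is not part of SoftFloat or TestFloat, and it assumes a 32-bit int):

#include <math.h>
#include <stdbool.h>

/* Convert f to int only when the truncated value is representable;
   report failure instead of invoking undefined behavior otherwise. */
static bool f32_to_i32_checked(float f, int *out) {
    /* Any float >= 2^31, < -2^31, or NaN cannot be truncated to a 32-bit int;
       -2147483648.0f itself is exactly representable and converts fine. */
    if (isnan(f) || f >= 2147483648.0f || f < -2147483648.0f)
        return false;
    *out = (int)f;
    return true;
}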

prabu naresh