
I have code that runs on different platforms that seems to get different results. I am looking for a proper explanation.

I expected casting to unsigned to work the same for float or double as for int.¹

Windows:

double dbl = -123.45; 
int d_cast = (unsigned int)dbl; 
// d_cast == -123

WinCE (ARM):

double dbl = -123.45; 
int d_cast = (unsigned int)dbl; 
// d_cast == 0

EDIT:

Thanks for pointing me in the right direction.

Fix / workaround:

double dbl = -123.45; 
int d_cast = (unsigned)(int)dbl; 
// d_cast == -123
// works on both. 

Footnote 1: Editor's note: converting an out-of-range unsigned value to a signed type like int is implementation-defined (not undefined); see C17 §6.3.1.3 ¶3.

So the assignment to d_cast is also not nailed down by the standard for cases where (unsigned)dbl ends up being a huge positive value on some particular implementation. (That path of execution contains UB, so ISO C is already out the window in theory.) In practice, compilers do what we expect on normal 2's complement machines and leave the bit pattern unchanged.
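To spell out what the footnote means for the workaround above, here is a minimal sketch (assuming 32-bit int/unsigned and an ordinary 2's complement implementation, which matches both platforms in the question) of the three conversions involved:

#include <stdio.h>

int main(void)
{
    double dbl = -123.45;

    /* double -> int: well defined here, because the truncated value (-123) fits in int. */
    int as_int = (int)dbl;

    /* int -> unsigned: defined as reduction modulo UINT_MAX+1, so -123 becomes 4294967173u with a 32-bit unsigned int. */
    unsigned as_unsigned = (unsigned)as_int;

    /* unsigned -> int: implementation-defined for out-of-range values; typical 2's complement compilers keep the bit pattern, giving -123 back. */
    int d_cast = (int)as_unsigned;

    printf("%d %u %d\n", as_int, as_unsigned, d_cast);
    return 0;
}

On such an implementation this prints -123 4294967173 -123; only the last step depends on implementation-defined behavior.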

Peter Cordes
corn3lius
  • @DanielA.White: Why oh why would it have *anything* to do with endianness? There aren't even any pointers in the code. – Kerrek SB May 10 '12 at 19:58
  • What do you expect to happen when you cast a negative double to an unsigned int? – David Heffernan May 10 '12 at 19:59
  • possible duplicate of [iphone: floats cast to unsigned ints get set to 0 if they are negative?](http://stackoverflow.com/questions/2490600/iphone-floats-cast-to-unsigned-ints-get-set-to-0-if-they-are-negative) – Stephen Canon May 10 '12 at 20:18
  • @DavidHeffernan I would assume the integral part of the double would be truncated and cast as the unsigned type – corn3lius May 10 '12 at 20:24
  • Interesting. I personally wouldn't expect anything meaningful out of that operation. – David Heffernan May 10 '12 at 20:25

3 Answers


No


This conversion is undefined and therefore not portable.

C99/C11 6.3.1.4:

When a finite value of real floating type is converted to an integer type other than _Bool, the fractional part is discarded (i.e., the value is truncated toward zero). If the value of the integral part cannot be represented by the integer type, the behavior is undefined.

According to C11 6.3.1.4 footnote 61:

The remaindering operation performed when a value of integer type is converted to unsigned type need not be performed when a value of real floating type is converted to unsigned type. Thus, the range of portable real floating values is (−1, Utype_MAX+1).
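As an illustration only, a hedged sketch of one way to respect the portable range quoted above before converting; the helper name and the fallback policy are invented for this example rather than anything the standard mandates:

#include <limits.h>

/* Hypothetical helper: only perform the double -> unsigned conversion when the value lies in the portable range (-1, UINT_MAX+1); otherwise return a caller-supplied fallback. */
unsigned to_unsigned_or(double dbl, unsigned fallback)
{
    if (dbl > -1.0 && dbl < (double)UINT_MAX + 1.0)
        return (unsigned)dbl;   /* well defined per 6.3.1.4 */
    return fallback;
}

With this, to_unsigned_or(-123.45, 0u) returns the fallback instead of relying on undefined behavior, while to_unsigned_or(123.45, 0u) returns 123u.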

Lundin
DigitalRoss
  • Yes, but "When a finite value of real floating type is converted to an integer type other than _Bool, the fractional part is discarded (i.e., the value is truncated toward zero). If the value of the integral part cannot be represented by the integer type, the behavior is undefined" (from the C standard); and here the integral part can be represented by the integer type. – corn3lius May 10 '12 at 20:22
  • Yes, and footnote 50 seems to have been added specifically to prevent the interpretation you are suggesting. – DigitalRoss May 10 '12 at 20:45
  • Yes, thank you; sometimes you need to slam your head on the book a few times before it sinks in. – corn3lius May 10 '12 at 20:49
  • So, out of curiosity, what should one do if s/he thinks s/he might have a negative float and wants to convert it to an unsigned int? Is converting it to a signed int first, and then to an unsigned int safe? What's the right approach to take here? Thanks. – NHDaly Jul 12 '15 at 08:18
  • If the final value is unimportant for the out-of-range case, then just do the assignment. If the final value matters, then test it against the footnote 50 range before converting and do something (while it's still an FP value) that keeps the application happy. – DigitalRoss Jul 17 '15 at 21:47
  • @NHDaly: typically convert to signed `int` first if you don't need the upper half of the `unsigned` value range, or to `int64_t` if you might, before converting to `unsigned`. (Converting to signed `int64_t` is what compilers on x86-64 do in practice anyway for float -> uint32_t, so it's efficient there. `int64_t` should also be basically free on AArch64). On 2's complement machines, signed integer -> unsigned is just truncation of the bit pattern, or using it unmodified, i.e. has no cost in the asm. – Peter Cordes Mar 30 '20 at 09:05
  • I wonder if there are any platforms that would do something wonky (beyond yielding a possibly meaningless value) with values -1.0 and below which would not require special-case machine code to ensure that fractional values in the -1 to 0 range get converted to 0 rather than e.g. UINT_MAX? The Standard manages IMHO to both overspecify and underspecify conversion behavior. – supercat Jul 13 '22 at 15:24
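Building on the advice in the comments above (go through a signed integer type wide enough to hold the whole unsigned range, then rely on the well-defined signed-to-unsigned modulo conversion), here is a minimal sketch; the function name is made up for this illustration:

#include <stdint.h>

/* Hypothetical helper: widen to int64_t first (every uint32_t value fits in it), then convert to uint32_t, which is defined as reduction modulo 2^32. The first cast is still undefined if dbl is outside int64_t's range. */
uint32_t double_to_u32(double dbl)
{
    int64_t wide = (int64_t)dbl;   /* -123.45 -> -123, 3e9 -> 3000000000 */
    return (uint32_t)wide;         /* -123 -> 4294967173u */
}

This is essentially the question's (unsigned)(int)dbl workaround widened to 64 bits, so that values between INT_MAX and UINT_MAX are also handled without an undefined intermediate cast.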

C and C++ clearly should define (unsigned int)negativeFloat as equivalent to (unsigned int)(int)negativeFloat. Leaving it undefined is a disaster, especially now that x86 and ARM have diverged. ARM should follow x86 and work around this huge let-down in the language specification.

  • I think the Standard should specify that a conversion of a negative floating-point value x which is within the range +/-`UINT_MAX` to unsigned may yield either `(unsigned)(UINT_MAX+1.0+x)` or `-(unsigned)(-x)`, chosen in Unspecified fashion, which would be less specific than the Standard for cases strictly between -1.0 and 0 (allowing them to yield either 0u or UINT_MAX) but would unambiguously define behavior for integer values in the indicated range, and would avoid UB for all values in that range. – supercat Jul 13 '22 at 15:17

If there were some platform where the most efficient method of converting values in the range strictly between -1 and UINT_MAX+1.0 would be incapable of behaving meaningfully with negative values, and whose customers wouldn't care about behavior with negative values, the authors of the Standard wouldn't have wanted to mandate that implementations use a less efficient means of performing the conversion.

Further, implementations were probably inconsistent in how they handled conversion of negative fractional values [including, incidentally, those between 0 and -1]. For example, it's hardly implausible that an implementation might process `someUnsigned = someFloat;` as `someUnsigned = (someFloat < 0) ? (UINT_MAX+1.0)+someFloat : someFloat;`.

The classification of the behavior as UB doesn't imply any judgment that `(unsigned)(-1.0f)` shouldn't be expected to yield UINT_MAX, but rather a waiver of judgment over which cases various kinds of implementations should or should not be expected to process predictably.
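Purely to make the hypothetical implementation strategy above concrete (no real compiler is claimed here to work this way), a small sketch of the values such a scheme would produce:

#include <limits.h>
#include <stdio.h>

/* The hypothetical strategy from the answer: bias negative inputs by UINT_MAX+1.0 before truncating. Shown only for illustration; this is not portable conversion code. */
static unsigned hypothetical_convert(float someFloat)
{
    return (someFloat < 0) ? (unsigned)((UINT_MAX + 1.0) + someFloat)
                           : (unsigned)someFloat;
}

int main(void)
{
    printf("%u\n", hypothetical_convert(-1.0f));   /* UINT_MAX */
    printf("%u\n", hypothetical_convert(-0.5f));   /* also UINT_MAX, not 0 */
    printf("%u\n", hypothetical_convert(123.45f)); /* 123 */
    return 0;
}

The second line is the interesting one: under this scheme a fractional value between 0 and -1 lands on UINT_MAX rather than 0, which is exactly the kind of divergence the UB classification leaves room for.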

supercat