
I am facing a problem running a C program on an ARM processor.

#include <stdio.h>
#include <stdlib.h>

int main()
{
    float f = -40.000000;
    unsigned int a = f;

    printf("\t Actual TX Power = %0.2f dBm \n",
         (double)(int)a);

    return 0;
}

outputs:

on x86-        Actual TX Power = -40.00 dBm
on ARM-        Actual TX Power = -0.00 dBm

The value survives the round-trip on x86 but not ARM.

I am not able to figure out what the issue is. What could be the reason behind it?
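
A minimal diagnostic variant of the same program (added here for illustration, not part of the original question) prints each intermediate value, which makes it easier to see where the number is lost:

#include <stdio.h>

int main(void)
{
    float f = -40.0f;
    unsigned int a = (unsigned int)f;   /* converting a negative float to unsigned is undefined behaviour */

    printf("f      = %f\n", f);
    printf("a      = %u\n", a);         /* per the comments below: 0 on AArch64; typically 4294967256 (the bit pattern of -40) on x86 at -O0 */
    printf("(int)a = %d\n", (int)a);

    return 0;
}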

  • It looks like the result depends on optimization level. `gcc -O1` on x86-64 makes it print `0.0` (not `-0.0`), while `-O0` prints `-40.0`. So there's an [MCVE] here just on x86 (32 or 64-bit code). – Peter Cordes Mar 30 '20 at 07:43
  • 2
    Converting a negative number to an `unsigned` gives you a large number. Converting that into `int` produces an overflow. You cannot expect any consistent behaviour from that. – Jens Gustedt Mar 30 '20 at 07:48
  • `clang -target aarch64 -c foo.c` (optimization disabled) and linking with `aarch64-linux-gnu-gcc foo.o` also produces `0.0` under `qemu-aarch64`, unlike on x86, so there's a cross-platform difference, too, within the same compiler. https://godbolt.org/z/HVmfPK shows AArch64 clang codegen. – Peter Cordes Mar 30 '20 at 07:55
  • Compilers can optimise this very hard: on https://godbolt.org, ARM GCC 8.3.1 constant-folds the result to `0` and omits any floating-point code. – marko Mar 30 '20 at 08:12
  • 1
    @JensGustedt: that's not the actual problem. In real life on 2's complement systems, unsigned -> 2's complement conversion just uses the same bit-pattern unchanged. (It might be UB or unspecified in C, but in practice compilers let you get away with this.) Anyway, if you print `a`, you'll see that *it* is also `0`, because that's the nearest representable `unsigned` to `-40`. (AArch64 gcc uses `fcvtzu` to convert directly to unsigned with saturation to the value-range of unsigned, unlike x86 where it converts to signed 64-bit and takes the low half, if you disable optimization to hide UB.) – Peter Cordes Mar 30 '20 at 08:18
  • @PeterCordes, yes it is. Doing so is just undefined, and compilers produce (or run into) code that may make no sense to you. – Jens Gustedt Mar 30 '20 at 09:25
  • @JensGustedt: Yes, it's UB in ISO C. That wasn't the point I was making, though. This program encounters UB *before* that, in converting `-40.0f` to `unsigned` (see dups). That doesn't "give a large number", it's UB on the spot. But the actual point I was making is that despite large unsigned -> int being UB, *that* UB doesn't explain the actual behaviour the OP is seeing. In practice you can always get away with round-tripping an unsigned to `int` and back on GCC for x86 and ARM. (Not that you should, just that it doesn't explain this interesting practical difference in what UB led to.) – Peter Cordes Mar 30 '20 at 09:37
  • @JensGustedt: correction: `unsigned`->`int` overflow is [*implementation defined*](https://stackoverflow.com/questions/32806954/is-conversion-between-exactly-sized-integers-always-completely-defined-by-the-st), not UB. [ISO C17 § 6.3.1.3 .3](https://web.archive.org/web/20181230041359/http://www.open-std.org/jtc1/sc22/wg14/www/abq/c17_updated_proposed_fdis.pdf#page=57) – Peter Cordes Mar 30 '20 at 12:13
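
None of the comments spell out a fix, but they imply one: do the float-to-integer conversion in the signed domain, where -40 is representable, and only then move to `unsigned` if an unsigned container is really required. A minimal sketch of that approach (an illustration added here, not taken from the thread):

#include <stdio.h>

int main(void)
{
    float f = -40.0f;

    /* float -> int is well defined because -40 is representable in int. */
    int i = (int)f;

    /* int -> unsigned wraps modulo UINT_MAX + 1, which is fully defined. */
    unsigned int a = (unsigned int)i;

    /* unsigned -> int is implementation-defined for values above INT_MAX,
       but on the two's-complement targets discussed above it gives back -40. */
    printf("\t Actual TX Power = %0.2f dBm \n", (double)(int)a);

    return 0;
}

If the unsigned container is not actually needed, printing `f` directly (or keeping the value in a signed type throughout) avoids the problem entirely.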

0 Answers