
I'm communicating serially between a host PC and an embedded processor. On the embedded side, I need to parse character strings for floating-point and integer data. What I am currently doing is something along these lines:

inline float32* fp_unpack(float32* dest, volatile char* str) {
    Uint32 temp = (Uint32)str[3]<<24;
    temp |= (Uint32)str[2]<<16;
    temp |= (Uint32)str[1]<<8;
    temp |= (Uint32)str[0];
    temp = (float32)temp;
    *dest = (float32)temp;

    return dest;
}

where str holds four characters, each representing one byte of the float. The bytes in the string are ordered little-endian.

As an example, I'm trying to extract the number 100.0 from str. I've verified the contents of the string are:

s[0]: 0x00, s[1]: 0x00, s[2]: 0x20, s[3]: 0x41,

which is the 32-bit floating-point representation of 100.0. Furthermore, I've verified that the function successfully sets temp to 0x41200000. However, dest ends up being 0x4e824000. I know the problem arises from the line `*dest = (float32)temp;`, which I hoped would simply copy the bits from temp to dest, with a typecast to make the compiler happy.

However, I've realized that this won't be the case, since the operation `float x = (float)4/3;` actually converts 4 to 4.0, i.e. changing the bits.

How do I coerce the bits in temp into dest?

Thanks in advance

edit: Note that 0x41200000 as an integer is 1092616192, which, as a float, is 0x4e824000
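The edit note's numbers can be reproduced with a short sketch (`bits_of_value_cast` is a made-up helper name for illustration): casting the integer converts its *value*, and it is that converted float whose bit pattern comes out as 0x4E824000.

```c
#include <stdint.h>
#include <string.h>

/* Demonstrates the problem: (float)pattern converts the integer's
   *value* (1092616192 becomes 1092616192.0f), and that float's own
   bit pattern is 0x4E824000 -- the "wrong" dest observed above. */
static uint32_t bits_of_value_cast(uint32_t pattern)
{
    float converted = (float)pattern;        /* value conversion, not a bit copy */
    uint32_t bits;
    memcpy(&bits, &converted, sizeof bits);  /* read the float's raw bits */
    return bits;
}
```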

mskfisher
Trey
4 Answers


You need to cast the pointers. Casting the values simply converts the int to float. Try:

*dest = *((float32*)&temp);
andrewdski
  • Also, I'm very suspicious of the line `temp = (float32)temp;`. The best you could hope for would be no effect, but I think in fact it turns the value you've so carefully constructed into the `int` "100". – Ernest Friedman-Hill Jun 08 '11 at 15:27
  • 2
    This invokes undefined behavior and will result in incorrect code generation on modern compilers! – R.. GitHub STOP HELPING ICE Jun 08 '11 at 16:49
  • The behavior is certainly undefined. Interpreting the bits of an int as a float can't be defined portably, so the language must treat it as undefined. I'm not sure what you mean by incorrect code generation. Assuming that I know how the bits need to be laid out on my architecture, what will go wrong? (OTOH, the union answer below seems cleaner.) – andrewdski Jun 08 '11 at 22:12
  • The behavior is undefined because of violation of the aliasing rules. It's also true that, if the representations of `int` and `float` don't meet your expectations, you could create a trap representation resulting in an implementation-defined signal, but I think it's safe to say OP wants to assume the implementation defines the "usual" representations (IEEE single-precision `float` matching integer endianness). – R.. GitHub STOP HELPING ICE Jun 08 '11 at 23:33
  • Got it. You are talking about this: http://stackoverflow.com/questions/98650/what-is-the-strict-aliasing-rule. – andrewdski Jun 09 '11 at 00:26
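For completeness, here is this answer's cast approach applied to the question's byte assembly (`fp_unpack_cast` is an illustrative name; the `(unsigned char)` casts are an added safeguard against sign extension). As the comments above note, this violates the strict-aliasing rule, so treat it as a sketch of what many embedded toolchains happen to accept rather than portable C:

```c
#include <stdint.h>

/* The cast approach from this answer, applied to the question's
   byte assembly.  The (unsigned char) casts avoid sign extension
   on bytes >= 0x80.  Caveat: reading a uint32_t's storage through
   a float* is undefined behavior per the strict-aliasing rule. */
static float fp_unpack_cast(const volatile char *str)
{
    uint32_t temp = (uint32_t)(unsigned char)str[3] << 24;
    temp |= (uint32_t)(unsigned char)str[2] << 16;
    temp |= (uint32_t)(unsigned char)str[1] << 8;
    temp |= (uint32_t)(unsigned char)str[0];
    return *((float *)&temp);   /* reinterpret the bits via pointer cast */
}
```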

The portable way, which does not invoke undefined behavior through aliasing-rule violations:

float f;
uint32_t i;
memcpy(&f, &i, sizeof f);
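Applied to the question's fp_unpack, a sketch using standard types (`uint32_t` and `float` standing in for the question's `Uint32`/`float32`; the `(unsigned char)` casts are an added guard against sign extension):

```c
#include <stdint.h>
#include <string.h>

/* The question's fp_unpack reworked with memcpy.  The memcpy is the
   only substantive change: it copies the bits rather than converting
   the value, and compilers typically optimize it to a plain register
   move, so there is no runtime penalty. */
static inline float *fp_unpack(float *dest, const volatile char *str)
{
    uint32_t temp = (uint32_t)(unsigned char)str[3] << 24;
    temp |= (uint32_t)(unsigned char)str[2] << 16;
    temp |= (uint32_t)(unsigned char)str[1] << 8;
    temp |= (uint32_t)(unsigned char)str[0];
    memcpy(dest, &temp, sizeof *dest);   /* copy bits, not value */
    return dest;
}
```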
R.. GitHub STOP HELPING ICE

Here is one more solution:

union test {
     float f;
     unsigned int i;
} x;

float flt = 100.0;
unsigned int uint;

x.f = flt;
uint = x.i;

Now uint has the same bit pattern as the float in f.
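The same union trick also works in the direction the question needs, integer bits in and float out (`bits_to_float` is an illustrative name):

```c
#include <stdint.h>

/* Union type punning, going bits -> float: store the assembled
   integer through one member, read the float through the other.
   Well-defined in C99 and later, and widely supported by embedded
   compilers. */
static float bits_to_float(uint32_t bits)
{
    union {
        uint32_t i;
        float f;
    } u;
    u.i = bits;
    return u.f;
}
```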

phoxis

Isn't the hex (IEEE 754) representation of the float 100.0 actually 0x42C80000?

goldenmean