Need to put the bridgeSymbol hex value into the myVariable memory buffer. I've tried every cast that came to mind (bit_cast, reinterpret_cast).

The expected result is the hex value of (float)bridgeSymbol stored at the myVariable pointer address.

What am I missing?

uintptr_t myVariable = 0xC70BBF5C;

float bridgeSymbol = *(float*)(&myVariable);        //Big endian -35775.36 OK!
bridgeSymbol = bridgeSymbol / 10;                   //some random operation  = -3577.5

myVariable = (uintptr_t)bridgeSymbol;               //expected 0xc55f9800 but getting random values

Edit 1: More detailed explanation as suggested.

gl4ssiest
  • `//NOT WORKING` -- You need to be more descriptive as to what you mean by "not working". – PaulMcKenzie Dec 13 '19 at 19:33
  • Edited as suggested. Ty! – gl4ssiest Dec 13 '19 at 19:46
  • What you'll get is -3577.5 turned into an unsigned integer. Completely different bit pattern. Probably 0xFFFFFFFFFFFFF207, but I can't be sure. You want it to be an integer containing the correct bit pattern for a float, yes? – user4581301 Dec 13 '19 at 19:47
  • 2
    Casting a `float` to an integer type value does not return the bit pattern of the `float` - it returns the value of the `float` truncated to an integer. And `*(float*)(&myVariable);` is a hideous [strict aliasing violation](https://stackoverflow.com/questions/98650/what-is-the-strict-aliasing-rule) that can also lead to undefined behavior should `myvariable` not be properly aligned for a `float` value, or if `float` and `uintptr_t` are different sizes. – Andrew Henle Dec 13 '19 at 19:51
  • To get what I think you want, you have to do the same voodoo you did with `float bridgeSymbol = *(float*)(&myVariable);`, but in the other direction. You probably tried `myVariable = *(uintptr_t*)&bridgeSymbol;` at some point, but ran into the strict aliasing problem Andrew discusses above. Because this is all rule-breaking, there is no canonical correct answer. (See the sketch after these comments.) – user4581301 Dec 13 '19 at 19:56
  • @user4581301 As you stated, I need to get a float bit pattern into memory. myVariable's initial value came directly from memory, and it only becomes meaningful when cast as a float. What is the right way to operate on that value? – gl4ssiest Dec 13 '19 at 20:15
  • Well. No need for that comment. Selbie's answer raised most of the same points AND explained them. – user4581301 Dec 13 '19 at 20:24
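
The distinction the comments draw can be seen in a short sketch: a value cast truncates the float, while memcpy copies its bit pattern. The literal values below are chosen to match the question and assume a 32-bit IEEE-754 float; this is illustration, not anyone's posted code.

    #include <cstdint>
    #include <cstring>
    #include <cstdio>

    int main() {
        float f = -3577.5f;

        // Value conversion: truncates toward zero, the bit pattern is lost.
        long long truncated = (long long)f;      // -3577

        // Bit-pattern copy: memcpy avoids the strict-aliasing problem.
        uint32_t bits;
        std::memcpy(&bits, &f, sizeof(bits));    // 0xC55F9800 for IEEE-754 single precision

        std::printf("truncated = %lld, bits = 0x%08X\n", truncated, (unsigned)bits);
    }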

1 Answer


Here's something closer to what you want.

 static_assert(sizeof(float) == 4);
 myVariable = *(uint32_t*)(&bridgeSymbol);

The above basically says: treat the 4 bytes of memory occupied by the float as a 32-bit unsigned integer, then assign that value back to myVariable. It's literally the opposite of what you did to convert the original value into a float.

The trouble is that your myVariable is declared as uintptr_t, which is 32-bit on a 32-bit platform. But as soon as your code compiles for 64-bit, it will be a 64-bit number.

A float is almost always 32 bits on any architecture, so I'd recommend you declare myVariable as a uint32_t as well.
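
For reference, the same round trip can also be written with std::memcpy, which sidesteps the strict-aliasing concern raised in the comments. A minimal sketch, assuming a 32-bit IEEE-754 float:

    #include <cstdint>
    #include <cstring>

    int main() {
        uint32_t myVariable = 0xC70BBF5C;    // raw bits read from memory

        // Reinterpret the bits as a float (no aliasing violation).
        float bridgeSymbol;
        std::memcpy(&bridgeSymbol, &myVariable, sizeof(bridgeSymbol));

        bridgeSymbol /= 10;                  // some arithmetic on the float value

        // Copy the float's bit pattern back into the integer.
        std::memcpy(&myVariable, &bridgeSymbol, sizeof(myVariable));
        // myVariable now holds the IEEE-754 encoding of the new float value.
    }

In C++20, `std::bit_cast<uint32_t>(bridgeSymbol)` expresses the same bit copy directly.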

selbie