I'd like to assign a value to a variable like this:
double var = 0xFFFFFFFF;
As a result, var gets the value 65535.0 assigned. Since the compiler assumes a 64-bit target system, the number literal (i.e. all of its 32 bits) ends up entirely within the significand precision bits of the double. However, since 0xFFFF FFFF is just a notation for a bit pattern, without any hint about the representation, it could be interpreted quite differently when turned into a floating-point value. Thus, I was wondering if there is a way to manipulate this fixed interpretation of the value, in other words, to give a hint about the desired representation. (Maybe someone could also point me to the part of the standard where this implicit interpretation is defined.)
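To illustrate the distinction I mean: as far as I understand, the compiler takes the literal as an integer constant and converts its value arithmetically, whereas what I have in mind is closer to reusing the bit pattern literally. Roughly like this sketch (assuming IEEE 754 binary64 and an available uint64_t; the memcpy is just my stand-in for "take the bits as they are"):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* What happens today: the literal is an integer constant and its
       value is converted arithmetically to double.                      */
    double converted = 0xFFFFFFFF;

    /* What I have in mind: treat the 32 bits as (the low part of) the
       object representation of a double instead of as an integer value. */
    uint64_t pattern = 0x00000000FFFFFFFFULL;
    double reinterpreted;
    memcpy(&reinterpreted, &pattern, sizeof reinterpreted);

    printf("converted     = %g\n", converted);
    printf("reinterpreted = %g\n", reinterpreted);  /* a tiny subnormal here */
    return 0;
}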
So far, the default double-precision interpretation on my system seems to be

(int)0xFFFFFFFF x 10^0

Only the fraction field is getting filled.¹
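For reference, that impression comes from dumping the bit fields of var roughly like this (the sign/exponent/fraction split below is the IEEE 754 binary64 layout, which I am assuming my system uses):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    double var = 0xFFFFFFFF;

    /* Copy the object representation into an integer so the
       individual bit fields can be printed.                   */
    uint64_t bits;
    memcpy(&bits, &var, sizeof bits);

    printf("raw      : 0x%016llX\n", (unsigned long long)bits);
    printf("sign     : %llu\n",      (unsigned long long)(bits >> 63));
    printf("exponent : 0x%03llX\n",  (unsigned long long)((bits >> 52) & 0x7FF));
    printf("fraction : 0x%013llX\n", (unsigned long long)(bits & 0xFFFFFFFFFFFFFULL));
    return 0;
}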
So maybe (here: for 16-bit cross-compilation) I want it to be a different representation like:

(int)0xFFFFFF x 10^(int)0xFF

(ignoring the sign bit for a moment).
Thus my question: How can I force a custom double interpretation of the hex literal notation?
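For the 16-bit example above, the effect I am after would be something like the following sketch, where the split into a 24-bit mantissa and an 8-bit base-10 exponent is purely my own invention and not an existing feature:

#include <math.h>
#include <stdio.h>

int main(void)
{
    unsigned long raw = 0xFFFFFFFFUL;

    /* Hypothetical split of the 32-bit literal: high 24 bits as the
       mantissa, low 8 bits as a decimal exponent (sign ignored).     */
    unsigned long mantissa = raw >> 8;      /* 0xFFFFFF */
    unsigned long exponent = raw & 0xFFUL;  /* 0xFF     */

    double var = (double)mantissa * pow(10.0, (double)exponent);

    printf("var = %g\n", var);
    return 0;
}

Of course I can always do this splitting by hand at runtime; the question is whether the literal itself can carry that information.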
¹ Even when my hex literal is 0xFFFF FFFF FFFF FFFF, the value is only interpreted as the fraction part, although then some bits should clearly be used for the exponent and sign fields. It seems the literal just gets cut off.
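For completeness, here is the contrast for the full 64-bit pattern, again only a sketch under the same IEEE 754 binary64 / uint64_t assumptions (the all-ones pattern happens to be a NaN, so it merely shows that the two readings differ):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Arithmetic conversion of the full 64-bit literal ...             */
    double converted = 0xFFFFFFFFFFFFFFFFULL;

    /* ... versus taking the same 64 bits as the object representation
       of a double (an all-ones pattern is a NaN in binary64).           */
    uint64_t pattern = 0xFFFFFFFFFFFFFFFFULL;
    double reinterpreted;
    memcpy(&reinterpreted, &pattern, sizeof reinterpreted);

    printf("converted     = %g\n", converted);
    printf("reinterpreted = %g\n", reinterpreted);
    return 0;
}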