I'm having difficulties in understanding why this code results in 3f000000.
float f = 5e-1;
printf("%x", *(int*)&f);
This is undefined behavior: the standard does not guarantee that int and float have the same alignment requirements, and reading a float object through an int lvalue also violates the strict-aliasing rules, so the (int*)&f cast and the subsequent dereference are invalid.
In your case, the cast happened to produce a value consistent with the IEEE-754 representation of 0.5 (5e-1), shown below:

bin:  0011 1111 0000 0000 0000 0000 0000 0000
      +--- ---- -=== ==== ==== ==== ==== ====
hex:     3    f    0    0    0    0    0    0

where + is the sign bit, - marks the exponent bits, and = marks the mantissa bits.
However, there is absolutely no guarantee that the same program is going to produce the same result when you run it on other systems, or even that the program is going to run to completion.
You may prefer a small helper that dumps the raw bytes of any object:

#include <stdio.h>
#include <stddef.h>

/* Print each byte of the object as two hex digits (assumes len >= 1). */
void dump(const void *data, size_t len) {
    const unsigned char *x = data;
    printf("%02x", x[0]);
    for (size_t k = 1; k < len; k++) printf(" %02x", x[k]);
    puts("");
}
And then
float f = 5e-1;
dump(&f, sizeof f);
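On a typical little-endian machine (e.g. x86) this prints the bytes of the 0x3f000000 pattern with the least significant byte first, so you would likely see:

00 00 00 3f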
The behaviour of your code is undefined because of the (int*)&f cast: reading a float object through an int lvalue violates the strict-aliasing rules, since the two types are unrelated.

If you want to inspect the memory associated with f, then both C and C++ allow you to cast to const unsigned char* and walk through the memory with pointer arithmetic, up to sizeof(f) bytes.
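A minimal sketch of that approach in C (the byte order you see depends on the platform's endianness):

#include <stdio.h>
#include <stddef.h>

int main(void) {
    float f = 5e-1f;
    const unsigned char *p = (const unsigned char *)&f;  /* char access never violates aliasing rules */
    for (size_t i = 0; i < sizeof f; i++)
        printf("%02x ", p[i]);                           /* print each byte of f */
    putchar('\n');
    return 0;
}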
Since you are new to C/C++ - I guess - you may not be familiar with undefined behavior. In short, there are corners of the C and C++ languages where the standard does not define exactly what should happen. Why? There are many reasons, e.g. performance. In those cases, the result or effect of the operation depends entirely on the implementation (more specifically, on the compiler, e.g. gcc or VC++).
What you are doing here is listed as item 7 in the accepted answer to What are all the common undefined behaviours that a C++ programmer should know about?
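If the goal is just to get at the bit pattern, a well-defined alternative - this is a sketch of one common technique, not part of the linked answer - is to memcpy the bytes into a same-sized integer:

#include <stdio.h>
#include <string.h>
#include <inttypes.h>

int main(void) {
    float f = 5e-1f;
    uint32_t bits;                        /* assumes float is 32 bits wide   */
    memcpy(&bits, &f, sizeof bits);       /* well-defined, unlike *(int*)&f  */
    printf("%08" PRIx32 "\n", bits);      /* prints 3f000000 on IEEE-754 systems */
    return 0;
}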
Others have already addressed the undefined behavior so I'll only address this part:
I'm having difficulties in understanding why this code results in 3f000000
The code tries to print the binary representation of the float value 0.5 (or 5e-1).
(It does this in an illegal way though - see the other answers, e.g. https://stackoverflow.com/a/45036945/4386427, for the correct way.)
The explanation for the value 3f000000 is that your system seems to use the IEEE 754 single-precision binary floating-point format (see https://en.wikipedia.org/wiki/Single-precision_floating-point_format).
The format uses:
- 1 sign bit
- 8 exponent bits
- 23 fraction (mantissa) bits
So in your case 3f000000, i.e. 0011 1111 0000 0000 0000 0000 0000 0000 in binary, splits into:
- sign = 0
- exponent = 0111 1110 (126 decimal)
- fraction = 000 0000 0000 0000 0000 0000 (i.e. 0)
In general the value is calculated as:
value = (-1)^sign * (1 + fraction) * 2^(exponent - 127)
Since the sign and the fraction are both 0, it is pretty easy to calculate the value:
value = 1 * 1 * 2^(126 - 127) = 2^-1 = 0.5
So with IEEE 754 single-precision binary floating-point, a float with the value 0.5 is stored with the bit pattern 3f000000.
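As a sanity check, here is a small sketch (my own illustration, handling only normal numbers) that extracts the fields from the pattern 3f000000 and plugs them into the formula above:

#include <stdio.h>
#include <stdint.h>
#include <math.h>

int main(void) {
    uint32_t bits = 0x3f000000u;              /* the observed pattern        */
    uint32_t sign     = bits >> 31;           /* 1 sign bit                  */
    uint32_t exponent = (bits >> 23) & 0xffu; /* 8 exponent bits -> 126 here */
    uint32_t mantissa = bits & 0x7fffffu;     /* 23 fraction bits -> 0 here  */

    /* value = (-1)^sign * (1 + fraction) * 2^(exponent - 127) */
    double fraction = mantissa / 8388608.0;   /* divide by 2^23 */
    double value = (sign ? -1.0 : 1.0) * ldexp(1.0 + fraction, (int)exponent - 127);

    printf("value = %g\n", value);            /* prints 0.5 */
    return 0;
}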