On my CentOS 7 Intel 64 architecture, sizeof(float) is 4. So the little-endian result that I see in my test, 00 00 C8 C1, is a negative number. The Intel single-precision floating point representation is:
1 bit sign
8 bit exponent
23 bit significand (with the leading 1 bit implied)
As the Intel architecture is little-endian, the bytes 00 00 C8 C1 read in reverse order are C1 C8 00 00, so the floating point value is the bit pattern 1100 0001 1100 1000 0000 0000 0000 0000. The first 1 means the number is negative. The next 8 bits, 10000011 (decimal 131), are the exponent, and the next 4 bits, 1001, together with the implied leading 1 bit giving 11001, are the number 25 shifted right 4 bits (i.e. 1.1001 in binary). The exponent of 131 is offset from 127 by 4, which is the number of bits that 1.1001 must be shifted left to get back to 25.
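
If you want to see this on your own machine, here is a minimal sketch (assuming an IEEE 754 float and a memcpy-based byte dump, which may differ from whatever your original test did) that prints the raw bytes of -25.0f:

```
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = -25.0f;                  /* assumes IEEE 754 single precision */
    unsigned char bytes[sizeof f];

    memcpy(bytes, &f, sizeof f);       /* copy the raw object representation */

    /* On a little-endian Intel machine this prints: 00 00 c8 c1 */
    for (size_t i = 0; i < sizeof f; i++)
        printf("%02x ", bytes[i]);
    printf("\n");

    return 0;
}
```

On my little-endian box that prints 00 00 c8 c1, matching the bit pattern above.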
On a 64-bit representation, the exponent is 11 bits and the exponent offset is 1023. So you would expect the number to be 1 (negative sign), then decimal 1027 in 11 bits, 100 0000 0011, then 25 decimal as 1001 with the implied leading 1 bit (as in the single-precision version), then all zeros; together that is C0 39 00 00 00 00 00 00. You can see that the last 4 bytes are all zeros. But this is still little-endian, so in memory the 64-bit number looks like 00 00 00 00 00 00 39 C0. So you are getting all zeros if you print the first 4 bytes.
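
The same kind of dump for a double shows where the zeros come from; this is just a sketch assuming IEEE 754 double precision:

```
#include <stdio.h>
#include <string.h>

int main(void)
{
    double d = -25.0;                  /* assumes IEEE 754 double precision */
    unsigned char bytes[sizeof d];

    memcpy(bytes, &d, sizeof d);

    /* On a little-endian machine this prints: 00 00 00 00 00 00 39 c0   */
    /* Note the first 4 bytes are all zero, matching what you observed.  */
    for (size_t i = 0; i < sizeof d; i++)
        printf("%02x ", bytes[i]);
    printf("\n");

    return 0;
}
```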
You would see non-zero values from your program either by (a) specifying an 8-character array in the declaration and printing all 8 bytes (you would then see two bytes with 39 C0), or (b) using a value other than -25 in your test that requires more binary digits to represent, like a large prime number or an irrational number (as suggested by @David C. Rankin).
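
As a sketch of option (b), any value without an exact binary representation (I have used -25.1 here just as an illustration) fills the low-order bytes with non-zero fraction bits:

```
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* -25.1 has no exact binary representation, so the fraction bits    */
    /* extend all the way down and the low-order bytes are not all zero. */
    double d = -25.1;
    unsigned char bytes[sizeof d];

    memcpy(bytes, &d, sizeof d);

    for (size_t i = 0; i < sizeof d; i++)
        printf("%02x ", bytes[i]);
    printf("\n");

    return 0;
}
```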
Checking sizeof(float) would tell you your floating point size in bytes, and I would expect you to see 8, because you are seeing zeros and not C8 C1 like I do.
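
A quick way to check is simply to print the sizes; this is only a sanity check, not part of your original program:

```
#include <stdio.h>

int main(void)
{
    /* If sizeof(float) prints 8 rather than 4, that would explain the zero bytes. */
    printf("sizeof(float)  = %zu\n", sizeof(float));
    printf("sizeof(double) = %zu\n", sizeof(double));
    return 0;
}
```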