I am studying IEEE 754, which defines four classes of floating-point values: normalized, denormalized (subnormal), infinity, and NaN. In my C program, I test what happens when dividing by zero and print the bytes of the result:
#include <stdio.h>

void show_bytes(char* p, int len);

int main()
{
    float n = 2.5 / 0;        /* 2.5 / 0.0 evaluates to +infinity under IEEE 754 */
    char* p = (char*)&n;      /* view the float's storage byte by byte */
    int len = sizeof(n);
    printf("%d\n", len);
    show_bytes(p, len);
    printf("\n");
}

void show_bytes(char* p, int len)
{
    for (int i = 0; i < len; ++i)
    {
        printf("%.2x\t", *p);
        ++p;
    }
}
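For +infinity I expect the single-precision bit pattern 0x7f800000 (sign 0, exponent all ones, fraction all zeros), which on my little-endian machine should come out as the bytes 00 00 80 7f. Here is a minimal sketch I wrote just to double-check that expectation (the memcpy into a uint32_t is my own addition, not part of the program above):

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <math.h>

int main(void)
{
    float n = INFINITY;              /* same value as 2.5 / 0 above */
    uint32_t bits;
    memcpy(&bits, &n, sizeof bits);  /* copy the raw bit pattern */
    printf("%08x\n", bits);          /* prints 7f800000 */
    return 0;
}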
But the output of the original program is
4
00 00 ffffff80 7f
I cannot understand why the third byte is printed as four bytes (ffffff80). sizeof(n) == 4, yet the printed output contains seven bytes' worth of hex digits instead of four.
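Out of curiosity I also tried printing through an unsigned char pointer (this variant is my own experiment, not the original code), and then I get the expected 00 00 80 7f:

#include <stdio.h>

void show_bytes(unsigned char* p, int len)
{
    for (int i = 0; i < len; ++i)
    {
        printf("%.2x\t", *p);  /* unsigned char promotes to a non-negative int */
        ++p;
    }
}

int main(void)
{
    float n = 2.5 / 0;
    show_bytes((unsigned char*)&n, sizeof(n));
    printf("\n");
    return 0;
}

So the extra ff bytes seem connected to plain char being signed here, but I do not understand the mechanism.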
CPU Info
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 142
Model name: Intel(R) Celeron(R) CPU 3865U @ 1.80GHz
Stepping: 9
CPU MHz: 1099.951
CPU max MHz: 1800.0000
CPU min MHz: 400.0000
BogoMIPS: 3600.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 2048K
NUMA node0 CPU(s): 0,1
GCC Version: gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC)