See Single-precision floating-point format for details about a typical C float.
Range of magnitude for float in c programming language?
#include <float.h>
#include <stdio.h>

int main(void) {
    printf("float magnitude range %e to %e\n", FLT_MIN, FLT_MAX);
    // typical result
    // float magnitude range 1.175494e-38 to 3.402823e+38
}
How is that possible since we have only 32 bits?
A typical float does indeed store only about 2^32 different values. Yet they are not distributed linearly, but logarithmically, in linearly spaced groups (see the code sketch after the list below):
2^23 different values in the range [2^-126 to 2^-125)
...
2^23 different values in the range [0.5 to 1.0)
2^23 different values in the range [1.0 to 2.0)
2^23 different values in the range [2.0 to 4.0)
...
2^23 different values in the range [2^127 to 2^128)
And their negative counterparts.
Also +/- zero, small sub-normal numbers, +/- infinity and Not-a-Number (NaN).
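To see those groups in action, here is a minimal sketch (an addition, not part of the original answer) that prints the gap between adjacent float values around a few sample points; the gap stays constant within a group and doubles when crossing into the next one:

#include <math.h>
#include <stdio.h>

int main(void) {
    // Gap (ULP) between a float and the next representable float above it.
    float samples[] = {0.5f, 0.75f, 1.0f, 1.5f, 2.0f, 3.0f};
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        float x = samples[i];
        printf("gap above %g is %e\n", x, nextafterf(x, INFINITY) - x);
    }
    // typical output: values in [0.5, 1.0) share a gap of 2^-24 (about 5.96e-08),
    // [1.0, 2.0) has gap 2^-23 (about 1.19e-07), [2.0, 4.0) has gap 2^-22, ...
}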
It also has to keep track of the sign, so how is it storing all this in just 32 bits?
1 bit for the sign
8 bits for the biased binary exponent
23 bits for the significand (with an implied leading 1 bit for normal values)
--
32 bits
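To illustrate that layout, here is a minimal sketch (an addition, assuming a 32-bit IEEE 754 float, i.e. sizeof(float) == 4) that copies the bits of a float into a uint32_t and splits out the three fields:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float f = -1.5f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);            // view the object representation
    unsigned sign     = bits >> 31;            //  1 bit
    unsigned exponent = (bits >> 23) & 0xFFu;  //  8 bits, biased by 127
    unsigned fraction = bits & 0x7FFFFFu;      // 23 bits, leading 1 implied for normals
    printf("sign %u, biased exponent %u, fraction 0x%06X\n", sign, exponent, fraction);
    // typical output for -1.5f: sign 1, biased exponent 127, fraction 0x400000
}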
When printing a float, only about 6 (FLT_DIG) significant decimal digits are usually important; FLT_DECIMAL_DIG (typically 9) digits are enough to print a float so that it scans back to the same value.
printf("%.*e\n", FLT_DIG-1, FLT_TRUE_MIN);
printf("%.*e\n", FLT_DIG-1, FLT_MIN);
printf("%.*e\n", FLT_DIG-1, acos(-1));
printf("%.*e\n", FLT_DECIMAL_DIG - 1, nextafterf(FLT_MAX, 0.0));
printf("%.*e\n", FLT_DECIMAL_DIG - 1, FLT_MAX);
Output
1.40130e-45 // min sub-normal
1.17549e-38 // min normal
3.14159e+00 // pi
3.40282326e+38 // number before max
3.40282347e+38 // max
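As a follow-up sketch (not from the original answer): FLT_DECIMAL_DIG significant decimal digits are enough that printing a float and scanning the text back recovers the same value.

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    float x = acosf(-1);    // pi rounded to float
    char buf[64];
    // Print with FLT_DECIMAL_DIG significant digits, then scan the text back.
    snprintf(buf, sizeof buf, "%.*e", FLT_DECIMAL_DIG - 1, x);
    float y;
    sscanf(buf, "%f", &y);
    printf("%s scans back %s\n", buf, x == y ? "to the same float" : "to a different float");
    // typical output: 3.14159274e+00 scans back to the same float
}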