The C standard does not specify what format is used for the float
type, aside from some minimum requirements on it. The IEEE-754 binary32 type is commonly used. In this type, finite numbers are represented as ±2^e • f, where e is an integer with −126 ≤ e ≤ 127 and f is a 24-bit (not 23-bit) binary numeral with the radix point after the first digit (so d.ddd…ddd, where each d is 0 or 1 and there are 24 of them).
f is called the significand. (Mantissa is an old term for the fraction portion of a logarithm. Significands are linear; mantissas are logarithmic.)
The floating-point number is encoded into three fields S, E, and F:
- S is 0 for + and 1 for −.
- If the first bit of f is 1, E is e+127. If the first bit of f is 0, E is 0.
- F is the last 23 bits of f.
Note that the encoding cannot encode floating-point representations in which f starts with 0 unless e is −126. However, any such representation can be converted to an encodable representation by shifting bits in f left and decreasing e, until either the first bit of f is 1 or e is −126. Then the new representation represents the same number and is encodable.
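For concreteness, here is a small C sketch that pulls the S, E, and F fields out of a float. It assumes float is IEEE-754 binary32 and unsigned int is 32 bits; the value −6.125 is just an illustrative pick.

```c
#include <stdio.h>
#include <string.h>

/* Assumes float is IEEE-754 binary32 and unsigned int is 32 bits. */
int main(void)
{
    float x = -6.125f;                 /* -6.125 = -2^2 * 1.53125 */
    unsigned int bits;
    memcpy(&bits, &x, sizeof bits);    /* reinterpret the float's bytes */

    unsigned int S = bits >> 31;             /* sign field              */
    unsigned int E = (bits >> 23) & 0xFF;    /* exponent field          */
    unsigned int F = bits & 0x7FFFFF;        /* last 23 bits of f       */

    printf("S = %u, E = %u (e = %d), F = 0x%06X\n",
           S, E, (int) E - 127, F);
    /* For a normal number, f is 1.F in binary, so the value is
       (-1)^S * 2^(E-127) * (1 + F / 2^23). */
    return 0;
}
```

The memcpy is used instead of a pointer cast so the byte reinterpretation does not run afoul of aliasing rules.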
Does this mean that I can represent precisely only real numbers with 23 bits of precision?
It means the only finite numbers that can be represented are those that can be represented with 24 or fewer bits and an exponent e in the range −126 ≤ e ≤ 127.
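A quick way to see that limit is to check where consecutive integers stop surviving conversion. This is a minimal sketch, again assuming binary32; the constants are 2^23 and 2^24 from the discussion above.

```c
#include <stdio.h>

int main(void)
{
    /* 2^24 + 1 needs 25 significant bits, so adding 1 to 2^24
       rounds back to 2^24 (assuming binary32 float). */
    float a = 16777216.0f;       /* 2^24, exactly representable */
    float b = a + 1.0f;          /* 2^24 + 1 rounds to 2^24     */
    printf("%.1f %.1f equal=%d\n", a, b, a == b);

    /* 2^23 + 1 still fits in 24 significant bits, so it is exact. */
    float c = 8388608.0f + 1.0f;
    printf("%.1f\n", c);         /* prints 8388609.0 */
    return 0;
}
```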
That is, if I would like to represent an integer greater than 2^23, will its representation be imprecise?
Not always; 2^30 is representable, because it can be represented as +2^30 • 1.00000000000000000000000₂, and 2^30 + 2^7 is representable because it can be represented as +2^30 • 1.00000000000000000000001₂.
Integers over 2^24 can be represented as long as only 24 significant bits are needed to represent them and they are less than 2^128.
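As a sketch of that last point (assuming binary32, using the numbers above, and adding 2^30 + 1 as a counterexample that needs 31 significant bits):

```c
#include <stdio.h>

int main(void)
{
    /* 2^30 and 2^30 + 2^7 each need at most 24 significant bits,
       so both convert to float exactly (assuming binary32). */
    long n1 = 1073741824L;           /* 2^30                      */
    long n2 = 1073741824L + 128L;    /* 2^30 + 2^7                */
    long n3 = 1073741824L + 1L;      /* 2^30 + 1, 31 significant bits */

    printf("2^30       exact: %d\n", (long) (float) n1 == n1);  /* 1 */
    printf("2^30 + 2^7 exact: %d\n", (long) (float) n2 == n2);  /* 1 */
    printf("2^30 + 1   exact: %d\n", (long) (float) n3 == n3);  /* 0 */
    return 0;
}
```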