As we know, the data type float in C complies with IEEE 754.
The C standard does not require this. Many C implementations use IEEE-754 formats for their floating-point types but do not fully conform to the standard, and some use floating-point formats other than the IEEE-754 types altogether.
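As a minimal sketch of how you can ask your own implementation about this: C's Annex F defines the macro __STDC_IEC_559__, which an implementation defines when it claims IEC 60559 (IEEE-754) conformance for its floating-point arithmetic. Whether your compiler defines it, and how faithfully it honors the claim, varies in practice.

```c
#include <stdio.h>

int main(void)
{
    /* Annex F: defined by implementations that claim IEC 60559
       (IEEE-754) conformance for their floating-point arithmetic. */
#ifdef __STDC_IEC_559__
    puts("This implementation claims IEC 60559 / IEEE-754 conformance.");
#else
    puts("No IEC 60559 conformance is claimed.");
#endif
    return 0;
}
```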
Given that, how many decimal digits can a float represent without loss of accuracy?
This is not a clear question to ask. Consider the number 0.1. As a decimal numeral, it is represented with a single digit, but, as a binary numeral, it cannot be represented with any finite number of digits. It would be .0001100110011001100…
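You can see this directly by printing a float initialized to 0.1 with more digits than it can meaningfully hold; the sketch below assumes a binary32 float, and the exact digits printed will depend on your implementation.

```c
#include <stdio.h>

int main(void)
{
    /* 0.1 has no finite binary representation, so the stored float is
       only the nearest representable value; printing extra decimal
       digits exposes the difference. */
    float x = 0.1f;
    printf("%.20f\n", x);  /* typically prints 0.10000000149011611938 */
    return 0;
}
```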
The IEEE-754 binary32 format, commonly used for float, has sufficient precision that, if any decimal numeral with six significant digits is converted to the nearest value representable in the binary32 format, and that binary32 number is converted back to a decimal numeral rounded to six significant digits, the result will equal the original numeral, as long as the first conversion did not overflow or underflow the finite range of the format.
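This six-digit guarantee is what FLT_DIG in <float.h> reports (6 for binary32). The sketch below demonstrates the decimal → float → decimal round trip it describes; it assumes an IEEE-754 binary32 float and correctly rounded strtof/printf conversions, and the particular value 0.123456 is just an illustrative choice.

```c
#include <float.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* A decimal numeral with six significant digits. */
    const char *decimal = "0.123456";

    /* Convert it to the nearest representable float... */
    float f = strtof(decimal, NULL);

    /* ...then back to a decimal numeral rounded to FLT_DIG
       significant digits. */
    char back[64];
    snprintf(back, sizeof back, "%.*g", FLT_DIG, (double)f);

    printf("original: %s, round trip: %s, equal: %s\n",
           decimal, back, strcmp(decimal, back) == 0 ? "yes" : "no");
    return 0;
}
```

With more than FLT_DIG significant digits in the original numeral, the round trip is no longer guaranteed to reproduce it.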