Your C implementation likely uses the IEEE-754 binary32 and binary64 formats for `float` and `double`. Given this, `float f = 1.4769996;` results in setting `f` to 1.47699964046478271484375, and `f = 1.4759995;` results in setting `f` to 1.47599947452545166015625.

Then it is easy to see that rounding 1.47699964046478271484375 to six digits after the decimal point results in 1.477000 (because the next digit is 6, so we round up), and rounding 1.47599947452545166015625 to six digits after the decimal point results in 1.475999 (because the next digit is 4, so we round down).
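A minimal sketch of this, assuming IEEE-754 binary32 `float` and a `printf` that rounds correctly even past `DECIMAL_DIG` digits (common in good implementations):

```c
#include <stdio.h>

int main(void)
{
    float f = 1.4769996;  /* nearest binary32 value: 1.47699964046478271484375 */
    float g = 1.4759995;  /* nearest binary32 value: 1.47599947452545166015625 */

    /* Rounded to six digits after the decimal point. */
    printf("%.6f\n", f);   /* 1.477000 */
    printf("%.6f\n", g);   /* 1.475999 */

    /* Enough digits to show the stored values exactly. */
    printf("%.23f\n", f);  /* 1.47699964046478271484375 */
    printf("%.23f\n", g);  /* 1.47599947452545166015625 */
}
```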
When working with floating-point numbers, it is important to understand that each floating-point value represents one number exactly (unless it is a Not a Number [NaN] encoding). When you write `1.4769996` in source code, it is converted to a value representable in `double`. When you assign it to a `float`, it is converted to a value representable in `float`. Operations on the floating-point object behave as if the object has exactly the value it represents, not as if its value is the numeral you wrote in source code.
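A small sketch of those two conversions (source literal to `double`, then `double` to `float`), again assuming the IEEE-754 formats:

```c
#include <stdio.h>

int main(void)
{
    double d = 1.4769996;  /* literal converted to the nearest double */
    float  f = 1.4769996;  /* same literal, converted again to the nearest float */

    /* The two stored values differ, and neither is exactly 1.4769996. */
    printf("%.23f\n", d);
    printf("%.23f\n", f);
}
```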
To provide some further details, the C standard requires (in C 2018 7.21.6.1 13) that formatting with `f` be correctly rounded if the number of digits requested is at most `DECIMAL_DIG`. `DECIMAL_DIG` is the number of decimal digits in the widest floating-point format the implementation supports such that converting any number in that format to a numeral with `DECIMAL_DIG` significant decimal digits and back to the floating-point format yields the original value (5.2.4.2.2 12). `DECIMAL_DIG` must be at least 10. If more than `DECIMAL_DIG` digits are requested, the C standard allows some leeway in rounding. However, high-quality C implementations will round correctly as specified by IEEE-754 (to the nearest number with the requested number of digits, with ties favoring an even low digit).
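For illustration, here is a minimal sketch of the round-trip property that defines `DECIMAL_DIG`; the `0.1L` test value and buffer size are arbitrary choices:

```c
#include <float.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    long double x = 0.1L;  /* arbitrary value of the widest format */
    char buf[64];

    /* DECIMAL_DIG significant digits suffice to round-trip any value
       of the widest supported floating-point format. */
    snprintf(buf, sizeof buf, "%.*Lg", DECIMAL_DIG, x);
    long double y = strtold(buf, NULL);

    printf("DECIMAL_DIG = %d\n", DECIMAL_DIG);
    printf("text: %s, round-trips: %s\n", buf, x == y ? "yes" : "no");
}
```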