Why does it do this when both numbers are the same?
The numbers are not the same value. `double` can typically represent exactly about 2<sup>64</sup> different values; `float`, about 2<sup>32</sup>. 2.7 is not one of those values: approximations are made, given the binary nature of floating-point encoding versus the decimal text 2.7.
The compiler converts 2.7 to the nearest representable `double`, which is 2.70000000000000017763568394002504646778106689453125 given the typical binary64 representation of `double`. The next closest `double` is 2.699999999999999733546474089962430298328399658203125. Knowing the exact value beyond 17 significant digits is of limited usefulness.
When the value is assigned to a `float`, it becomes the nearest representable `float`: 2.7000000476837158203125. 2.70000000000000017763568394002504646778106689453125 does not equal 2.7000000476837158203125.
Should a compiler use a `float`/`double` that represents 2.7 exactly, as with decimal32/decimal64, the code would work as OP expected. A `double` representation using an underlying decimal format is rare. Far more often the underlying `double` representation is base 2, and these conversion artifacts need to be considered when programming.
Had the code been `float f = 2.5;`, the `float` and `double` values, using either a binary or a decimal underlying format, would have made `if (f == 2.5)` true: the same value that the high-precision `double` holds is representable exactly as a low-precision `float`.
(Assuming binary32/binary64 floating point.) `double` has 53 bits of significance and `float` has 24. The key is that if the number as a `double` has its least significant 53 − 24 = 29 bits set to 0, then when converted to `float` it will have the same numeric value as a `float` or `double`. Numbers like 1, 2.5 and 2.7000000476837158203125 fulfill that. (Range, sub-normal, and NaN issues ignored here.)
This is a reason why exact floating-point comparisons are typically done only in select situations.