Since `double` is wider than `float`, `x == 0.1` is interpreted as `(double) x == 0.1`.
This works for 0.5 because 0.5 is exactly representable in binary, so `(double) 0.5f` produces precisely 0.5. On the other hand, 0.1 has an infinitely repeating representation in binary, so `0.1f` and `0.1` are each rounded to a different nearby value: each holds a different number of initial digits of that infinite sequence.
By analogy with decimal numbers, you can think of the above situation as trying to write down the fraction 1/3 rounded to a fixed number of decimal digits. Using a 5-significant-digit representation, you get 0.33333; choosing a 10-digit one results in 0.3333333333. Now, "casting" the five-digit number to ten digits gives 0.3333300000, which is a different number than 0.3333333333. In the same analogy, 0.5 in binary is like 1/10 in decimal, which would be represented as 0.10000 and 0.1000000000 respectively, so you could convert it from one representation to the other and back without changing its value.
If `x` holds a marker value set from code, simply compare it to `0.1f` instead of `0.1`. If it is the result of a calculation, see Paul's answer for the correct way to compare floating-point quantities.