I couldn't find any question about this, so I decided to post. Basically, I just noticed that whenever I initialize float or double variables, the actual values assigned are approximations of what I specified.
Ex:
float dx = 7.1; //upon debugging this is: 7.09999990
float dy = 9.9; //upon debugging this is: 9.89999962
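
For anyone who wants to reproduce this without a debugger, printing the variables with a few extra digits shows the same thing (a minimal sketch; the exact trailing digits may differ slightly between compilers/platforms):

#include <stdio.h>

int main(void) {
    float dx = 7.1; // the double literal 7.1, converted to float
    float dy = 9.9;

    // 8 digits after the decimal point is enough to see the rounding
    printf("dx = %.8f\n", dx); // 7.09999990 on my machine
    printf("dy = %.8f\n", dy); // 9.89999962 on my machine
    return 0;
}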
I realize that these are approximations of what I asked for, and that in a real calculation it's absolutely understood that the result would itself be an approximation, but I was wondering about it because I sort of expected the approximation to be 7.10000000.
Also, if I use the constant 7.1 or 7.10000000 in something like if (7.10000000 == 7.1), it seems the literal constant is approximated to 7.0999999999999996. (I guess this part is fairly obvious now: the literal is approximated as a double, not a float, so that one isn't a mystery anymore.)
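
In case it helps, this is roughly the test I had in mind (just a sketch; the printed digits are what I see on my machine):

#include <stdio.h>

int main(void) {
    float f = 7.1;  // the double literal 7.1, rounded again to float
    double d = 7.1; // kept as a double

    printf("float : %.17g\n", f); // 7.0999999046325684
    printf("double: %.17g\n", d); // 7.0999999999999996

    // f is converted back to double for the comparison, so the two
    // different approximations of 7.1 compare unequal
    printf("%s\n", (f == 7.1) ? "equal" : "not equal"); // not equal
    return 0;
}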
So I guess my question is: why is 7.1 approximated to 7.09999990, and not to 7.09999999, which is closer to 7.1 than 7.09999990 is? Am I missing something?
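
If it helps narrow down what I'm asking, this is the kind of check I mean: listing the representable floats on either side of 7.1 (a sketch using nextafterf from <math.h>; I haven't studied the output in detail yet):

#include <math.h>
#include <stdio.h>

int main(void) {
    float f = 7.1; // nearest float to 7.1

    // print f together with its immediate neighbours among representable floats
    printf("previous: %.10f\n", nextafterf(f, 0.0f));
    printf("value   : %.10f\n", f);
    printf("next    : %.10f\n", nextafterf(f, 8.0f));
    return 0;
}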