This is a very common misconception. A floating point number is an approximate representation of a real number. The most common floating point standard (IEEE 754) uses base 2, and base 2 cannot exactly represent every base 10 fraction.
This has nothing to do with Xcode.
When you wrote 0.9 (that is, 9 × 10^-1), the computer stored the closest equivalent it could express in base 2. When that binary (base 2) approximation is converted back to decimal (base 10) for display, you get 0.899999976, which is as close as floating point could represent your number.
The standard way to compare floating point numbers is to choose a precision or tolerance, often called epsilon, which is how close two numbers must be to be considered equal (i.e. "close enough"). And because the closest approximation might be slightly lower or slightly higher than your number, you take the absolute difference and compare it to the tolerance. Thus:
#include <math.h> /* for fabs */

const float eps = 0.00001f;
if (fabs(a - b) < eps)
{
    // a and b are approximately equal
}
Floating point is a large and complicated topic, and it is well worth researching to get a good grasp. Start with this fantastic introduction to floating point: