Floating point types are not always capable of exactly representing their decimal values. It's either a "known bug" or "by design", depending on your perspective. Either way, it's a consequence of how floating point types are represented internally, and a common source of bugs.
The problem is nearly unavoidable, too, short of writing a complex computer algebra system that represents values symbolically, rather than as numeric types. Open up Windows Calculator, determine the square root of 4, and then subtract 2 from that value. You'll get some nonsensical floating point number that is incredibly close to 0, but not exactly 0. The result of your square root computation wasn't stored as exactly 2, so when you subtract exactly 2 from it, you get an "unexpected" result. Unexpected, that is, unless you know the dirty little secret about base 2 arithmetic.
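You can see the same kind of surprise directly in code. Here's a minimal C# sketch (my own illustration, not the calculator example itself, since sqrt(4) - 2 happens to come out exact with IEEE doubles); the classic demonstration is 0.1 + 0.2, because neither literal has an exact base-2 representation:

```csharp
using System;

class FloatingPointSurprise
{
    static void Main()
    {
        double sum = 0.1 + 0.2;

        // Prints False: neither 0.1 nor 0.2 can be stored exactly in base 2,
        // so their sum is not the same double as the one closest to 0.3.
        Console.WriteLine(sum == 0.3);

        // The "G17" format reveals the value actually stored: 0.30000000000000004
        Console.WriteLine(sum.ToString("G17"));
    }
}
```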
If you're curious, there are several places you can go for more detail on why this happens. Jon Skeet wrote an article explaining binary floating point operations in the context of the .NET Framework. If you have the time, you should also peruse the aptly-named publication What Every Computer Scientist Should Know About Floating-Point Arithmetic.
But the bottom line is that you shouldn't expect to be able to compare the result of a floating point operation to a floating point literal. In this specific case, you might try using the decimal type instead. It's not really a "solution" (see the other answers for those, with their scary mathematical concepts like epsilons), but the results are often more predictable, as the decimal type is better at accurately representing base 10 numbers (such as those used in currency and financial calculations).
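As a rough sketch of what that looks like in practice (my own example, reusing the 0.1 + 0.2 values from above rather than anything from the question), here is the decimal comparison next to the tolerance-based approach the other answers describe:

```csharp
using System;

class DecimalVsDouble
{
    static void Main()
    {
        // decimal stores base-10 digits, so these literals are represented exactly.
        decimal d = 0.1m + 0.2m;
        Console.WriteLine(d == 0.3m);                    // True

        // With double, compare against a tolerance instead of using == directly.
        double x = 0.1 + 0.2;
        const double epsilon = 1e-9;                     // tolerance chosen for values of this scale
        Console.WriteLine(Math.Abs(x - 0.3) < epsilon);  // True
    }
}
```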