We're used to floats being inaccurate for obvious reasons.
The thing is that for the most part, those "obvious reasons" apply to decimal fractions, too.
You can't represent the fraction 1/3 as a decimal fraction. You can approximate it as 0.333, or 0.333333333, but no matter how many 3's you add at the end, it's never going to be exact. And if you multiply it by 3 again, you're liable to get 0.999999999, not 1.0.
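You can see the same thing with actual decimal arithmetic, not just pencil and paper. Here's a minimal sketch using Python's decimal module, assuming its default 28-digit context:

    from decimal import Decimal

    # 1/3 can't be represented exactly in decimal either: the default
    # context keeps 28 significant digits and rounds off the rest.
    third = Decimal(1) / Decimal(3)
    print(third)      # 0.3333333333333333333333333333

    # Multiplying back by 3 exposes the roundoff, just like 0.333 * 3.
    print(third * 3)  # 0.9999999999999999999999999999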
You can't represent π = 3.141592654… exactly in either decimal or binary.
You can't represent √2 = 1.41421356… exactly in either decimal or binary.
You can't represent e = 2.718281828… exactly in either decimal or binary.
My point is that neither decimal nor binary has a monopoly on accuracy (or inaccuracy). It only seems like decimal is always right and binary is often wrong because we're so used to seeing decimal fractions that we overlook their inaccuracies, while the inaccuracies that arise when we convert to and from binary always startle us.
Now, one way that decimal is "better" than binary is that, mathematically, there are no binary fractions that can't be converted exactly to decimal, while there are plenty of decimal fractions (most of them, actually) that can't be converted exactly to binary. That is, if you've got a binary fraction like 0b1.010101, you can always convert it to an exact decimal fraction, 1.328125, but if you've got even the simplest decimal fraction, 0.1, when you try to convert it to binary you get an infinitely repeating pattern, 0b0.0001100110011…
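You can check both directions from Python, since Decimal(float) shows the exact decimal value of a double. This is just a sketch of the idea:

    from decimal import Decimal

    # The binary fraction 0b1.010101 is 1 + 1/4 + 1/16 + 1/64, and it
    # converts to decimal exactly:
    x = 1 + 1/4 + 1/16 + 1/64
    print(Decimal(x))    # 1.328125

    # But 0.1 has no exact binary form, so the double you actually get
    # is the nearest 53-bit binary fraction:
    print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625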
But this is all sort of by way of background, and doesn't answer your other question. Why does 27*(3/9) happen to give you an exact answer in binary, but not in decimal, even though 3/9 isn't representable exactly in either decimal or binary? And the answer is just that roundoff error is kind of random, and sometimes two roundoff errors cancel each other out. In IEEE-754 floating point, which is what Python is almost certainly using, the closest double-precision value to 3/9 is a 53-bit binary fraction which works out to exactly 0.333333333333333314829616256247390992939472198486328125. When you multiply that number by 27, the exact answer would be 8.999999999999999500399638918679556809365749359130859375. IEEE-754 says that when you multiply, the result you get (if inexact) must be a correctly-rounded version of the exact result, and that number is close enough to 9.0 (within half the spacing between adjacent doubles near 9) that it does indeed get rounded up.
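You can watch the cancellation happen, too. Again using Decimal(float) to display exact values (a sketch, assuming CPython's usual IEEE-754 doubles):

    from decimal import Decimal

    third = 3 / 9
    # The double actually stored for 3/9:
    print(Decimal(third))
    # 0.333333333333333314829616256247390992939472198486328125

    # The exact product is about 5.0e-16 below 9.0, well within half the
    # spacing between adjacent doubles near 9 (2**-49, about 1.8e-15), so
    # the correctly-rounded multiply lands exactly on 9.0:
    print(27 * third == 9.0)    # True
    print(Decimal(27 * third))  # 9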
As it turns out, Python's Decimal type does make the same correctly-rounded guarantee (it follows the General Decimal Arithmetic Specification); it's just that this time the rounding goes the other way. With the default 28 digits of precision, 3/9 comes out as 0.3333333333333333333333333333, the exact product of that and 27 is 8.9999999999999999999999999991, and that exact result is closer to 8.999999999999999999999999999 than it is to 9.0, so it gets rounded down instead of up.
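Here's the corresponding Decimal computation, with the default 28-digit context (a sketch of the behavior, not of the module's internals):

    from decimal import Decimal

    third = Decimal(3) / Decimal(9)
    print(third)       # 0.3333333333333333333333333333

    # The exact product, 8.9999999999999999999999999991, needs 29 digits,
    # so it's correctly rounded back to 28 -- and it's closer to
    # 8.999999999999999999999999999 than to 9.0, so it rounds down:
    print(27 * third)  # 8.999999999999999999999999999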
Footnote: I said that "roundoff error is kind of random", but that's not really true. A number theorist could tell us exactly which results are going to be exact and which approximate, and exactly when two roundoff errors will cancel each other out and when they will persist. But I don't know enough about number theory to even try to make that argument.