Your initial rounding is working,[1] in a sense. The problem is that 8.2 doesn't have a precise internal representation. If you just type 8.2 into irb or display the result of the #round(2) method call, it looks like you have 8.2, but you don't. A number slightly smaller than 8.2 is actually stored.
You end up being defeated by the defaults of the output rounding logic. Once the internal slightly-less-than-8.2 bits are multiplied, the error is shifted into the integer part of the number, and that part won't be rounded unless you ask for it. You could do this: (a * 1000000).round
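Here is what that looks like in irb (assuming a holds 8.2; the exact digits shown may vary by platform, but any IEEE 754 double behaves this way):

    a = 8.2
    a * 1000000            #=> 8199999.999999999 (the error is now in the integer part)
    (a * 1000000).round    #=> 8200000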
The problem is that we write the numbers in decimal but store them in binary. This works fine for integers, but poorly for fractions. In fact, most of the decimal fractions we write cannot be represented exactly.
Every machine fraction is a rational number of the form x/2^n. Now, the constants are decimal, and every decimal constant is a rational number of the form x/(2^n * 5^m). The 5^m factors are odd, so there isn't a 2^n factor in any of them. Only when m == 0 is there a finite representation in both the binary and the decimal expansion of the fraction. So, 1.25 is exact because it's 5/(2^2 * 5^0), but 0.1 is not because it's 1/(2^1 * 5^1). In fact, in the series 1.01 .. 1.99, only 3 of the numbers are exactly representable: 1.25, 1.50, and 1.75.
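You can verify that count with Ruby's built-in Rational class. A quick sketch, using the fact that a fraction is exactly representable in binary iff its reduced denominator is a power of two:

    (101..199).map { |k| Rational(k, 100) }
              .select { |r| (r.denominator & (r.denominator - 1)).zero? }
              .map(&:to_f)
    #=> [1.25, 1.5, 1.75]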
Because 8.2 has no exact representation, its binary expansion repeats forever, never quite adding up to exactly 8.2. The fraction bits go on to infinity as 1100110011...
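You can ask Ruby for the exact rational value it actually stored (output from a 64-bit IEEE 754 double; note the denominator is the power of two 2^48):

    8.2.to_r                      #=> (2308094809027379/281474976710656)
    8.2.to_r < Rational(82, 10)   #=> true, the stored value is slightly small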
[1] But note that you might have wanted a.round(1) instead of a.round(2). The parameter to #round is the number of fraction digits you want, not the number of significant digits. In this case the result was the same, so it didn't matter.
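To make the distinction concrete (a small sketch with an arbitrary value):

    x = 123.456
    x.round(2)   #=> 123.46 (two fraction digits, five significant digits)
    x.round(1)   #=> 123.5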