Why is ordering the summands differently creating a different result (note this question is not about why 0.1 + 0.2 is not 0.3)?
15.2 + 30.7 + 3 = 48.9
15.2 + (30.7 + 3) = 48.900000000000006
It's because of the values of the intermediate results.
In the first example, the additions are done in this order:
15.2 + 30.7 = 45.9
45.9 + 3 = 48.9
and so we have an intermediate result of 45.9.
In the second example, they're done in this order:
30.7 + 3 = 33.7
15.2 + 33.7 = 48.9
and so we have an intermediate result of 33.7.
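The two evaluation orders can be reproduced directly (a sketch in Python, whose floats are IEEE-754 doubles, matching the results shown above):

```python
# Floating-point addition is not associative: the same three values
# summed in different orders can round differently.
a, b, c = 15.2, 30.7, 3.0

left = (a + b) + c    # first order: intermediate is a + b (45.9)
right = a + (b + c)   # second order: intermediate is b + c (33.7)

print(left)           # 48.9
print(right)          # 48.900000000000006
print(left == right)  # False
```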
Apparently, the imprecision we all know about (the old 0.1 + 0.2 thing) creeps in with the second order and not the first. I'd assume that the intermediate value in the first case (45.9, or very close to it) is exactly representable, but that the intermediate value in the second case (33.7, or very close to it) is not; or at least that the first intermediate value is held more precisely than the second.
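One way to check this assumption is to inspect the exact values the doubles actually store (a sketch in Python; `decimal.Decimal(x)` prints a float's full decimal expansion rather than the shortest round-trip form):

```python
from decimal import Decimal

# Decimal(x) shows the precise value an IEEE-754 double stores,
# so we can compare each intermediate with the nearest double
# to its decimal value.
print(Decimal(15.2 + 30.7))  # intermediate of the first order
print(Decimal(45.9))         # the double nearest to 45.9
print(Decimal(30.7 + 3))     # intermediate of the second order
print(Decimal(33.7))         # the double nearest to 33.7
```

Both intermediate sums print the same stored value as the corresponding decimal literal, so the divergence ultimately comes from how the rounding errors accumulate through the remaining addition, not from one intermediate being represented exactly and the other not.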
The IEEE-754 standard requires that each intermediate result be rounded to the nearest value the type can hold, rather than being kept in a more precise form, precisely to increase reproducibility between platforms. From this article (which is linked from the article that Paul Roub linked in a comment):
The IEEE standard requires that the result of addition, subtraction, multiplication and division be exactly rounded. That is, the result must be computed exactly and then rounded to the nearest floating-point number (using round to even)....
One reason for completely specifying the results of arithmetic operations is to improve the portability of software. When a program is moved between two machines and both support IEEE arithmetic, then if any intermediate result differs, it must be because of software bugs, not from differences in arithmetic.