While I'm fairly aware that floating-point calculations sometimes suffer from accuracy issues, I don't know the scientific reason behind it. More precisely, I don't understand why some apparently straightforward roundings fail to be carried out correctly. These three examples behave strangely enough to make me wonder whether a fair explanation exists:
alert(0.57 * 10000); // this is supposed to be 5700
alert(10 / 3); // this is supposed to be 3.33333333 (why the 5 at the end)
alert(2.34 / 100); // this is supposed to be 0.0234 (why the 7 at the end)
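For what it's worth, here is a quick check I ran myself (it is not part of the original examples): asking for more digits suggests that the decimal literals are already stored only approximately, before any arithmetic happens.

alert((0.57).toPrecision(20));   // prints a value close to, but not exactly, 0.57
alert((2.34).toPrecision(20));   // likewise, not exactly 2.34
alert((10 / 3).toPrecision(20)); // more digits of the stored quotient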
This popular article discusses floating point in detail. I'm going to quote the following paragraph from it, which looks like an explanation of my problem. However, I was not able to understand what the paragraph is trying to say:
Since rounding error is inherent in floating-point computation, it is important to have a way to measure this error. Consider the floating-point format with β = 10 and p = 3, which will be used throughout this section. If the result of a floating-point computation is 3.12 × 10^-2, and the answer when computed to infinite precision is .0314, it is clear that this is in error by 2 units in the last place. Similarly, if the real number .0314159 is represented as 3.14 × 10^-2, then it is in error by .159 units in the last place. In general, if the floating-point number d.d...d × β^e is used to represent z, then it is in error by |d.d...d − (z / β^e)| β^(p−1) units in the last place. The term ulps will be used as shorthand for "units in the last place." If the result of a calculation is the floating-point number nearest to the correct result, it still might be in error by as much as .5 ulp. Another way to measure the difference between a floating-point number and the real number it is approximating is relative error, which is simply the difference between the two numbers divided by the real number. For example, the relative error committed when approximating 3.14159 by 3.14 × 10^0 is .00159/3.14159 ≈ .0005.
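My best guess at translating the paragraph's two measures into code is below, using its own numbers with β = 10 and p = 3 (the helper function and its name are mine, not from the article, and I'm not certain this is what the paragraph means). Even so, it doesn't tell me why the examples above go wrong:

// error in ulps: |d.d...d − z/β^e| · β^(p−1)
function errorInUlps(significand, z, beta, e, p) {
  return Math.abs(significand - z / Math.pow(beta, e)) * Math.pow(beta, p - 1);
}
alert(errorInUlps(3.12, 0.0314, 10, -2, 3));    // ≈ 2 ulps, as in the quote
alert(errorInUlps(3.14, 0.0314159, 10, -2, 3)); // ≈ 0.159 ulps
// relative error: |approximation − real| / real
alert(Math.abs(3.14 - 3.14159) / 3.14159);      // ≈ 0.0005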
In short, the regular Windows calculator gets all of these examples right, so why can't JavaScript engines behave the same way?