This is caused by the limited precision of floating point values. See The Floating Point Guide for full details.
The short version is that the 0.52 fractional part of your numbers cannot be represented exactly in binary, just like 1/3 cannot be represented exactly in decimal. Because of the limited number of digits of accuracy, the two values are rounded independently, so the larger number ends up slightly more precise than 100 times the smaller one would be, and therefore is not exactly 100 times as large.
If that doesn't make sense, imagine you are dealing with thirds, and pretend that numbers are represented as decimals, to ten decimal places. If you declare:
var t1 = 1000.0 / 3.0;
var t2 = 10.0 / 3.0;
Then t2 is represented as 3.3333333333, which is as close as can be represented with the given precision. Something that is 100 times as large as t2 would be 333.3333333300, but t1 is actually represented as 333.3333333333. It is not exactly 100 times t2, due to rounding/truncation being applied at different points for the different numbers.
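The same thing happens with real binary floating point. On a typical JavaScript engine the thirds example comes out roughly like this:

var t1 = 1000.0 / 3.0;
var t2 = 10.0 / 3.0;
console.log(t2);               // 3.3333333333333335
console.log(t2 * 100);         // 333.33333333333337
console.log(t1);               // 333.3333333333333
console.log(t1 === t2 * 100);  // false — the two results differ by one unit in the last place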
The fix, as usual with floating-point rounding issues, is to round the result to the precision you actually need before displaying or comparing it, or to switch to a decimal/arbitrary-precision library. Have a look at the JavaScript cheat sheet on the aforementioned guide for ways to go about this.
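As a minimal sketch of the rounding approach (the helper name and tolerance below are illustrative, not taken from the guide):

var result = 0.57 * 100;         // 56.99999999999999, not 57
console.log(result.toFixed(2));  // "57.00" — rounded only for display

// Illustrative comparison with a relative tolerance instead of ===
function nearlyEqual(a, b, epsilon) {
  return Math.abs(a - b) <= epsilon * Math.max(Math.abs(a), Math.abs(b));
}
console.log(nearlyEqual(0.57 * 100, 57, 1e-9));  // true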