for (var i = 0; i < 10; i += .1) {
}

console.log(i) // 10.09999999999998

BUT ...

for (var i = 0; i < 10; i += 1/8) {
}

console.log(i) // 10

Why is the result an integer when incrementing by 1/8?

Doray_Hong
  • Because that's how computers work. Please let me find a useful dupe. – Álvaro González Dec 28 '15 at 13:15
  • You can refer to the following post: [Floating point inaccuracy examples](http://stackoverflow.com/questions/2100490/floating-point-inaccuracy-examples) – Rajesh Dec 28 '15 at 13:15
  • Why wouldn't it be `1/8 === 0.125`, and `0.125 * 8 === 1`, so you'd get `9.5`, `9.625`, `9.75` and finally `9.875`; after that you'd get `10`, which doesn't pass the condition? Isn't it more interesting that you get `10.09999999999998` in the first one? – adeneo Dec 28 '15 at 13:15
  • Assign the value of `i` to another variable inside the loop, say `j`; after the loop, `j` holds `9.875`! Incredible! – Thamilhan Dec 28 '15 at 13:18
  • FYI, by JavaScript standards, the result in the second case is not actually an integer. `Number` in Javascript is defined as roughly equivalent to a C `double`, except when using bitwise operations. It just happens to omit the decimal when the value is exact. – ShadowRanger Dec 28 '15 at 13:50
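
To see adeneo's and Thamilhan's points concretely, here is a minimal sketch (not part of the original question) that records the last value each loop accepts:

var prev;

for (var i = 0; i < 10; i += 0.1) { prev = i; }
console.log(prev) // approximately 9.99999999999998 – not exactly 10, so one more pass runs
console.log(i)    // approximately 10.09999999999998

for (var j = 0; j < 10; j += 1/8) { prev = j; }
console.log(prev) // 9.875 – the last value that passes j < 10
console.log(j)    // 10 – 9.875 + 0.125 is exact in binary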

1 Answer


Because 1/8 can be represented exactly as a base-2 (binary) fraction, but 0.1 cannot. 1/8 is 2 to the negative third power, so it needs only a single bit after the binary point, whereas 0.1 has no finite binary expansion at all. Floating-point values are stored in binary, so arithmetic on exactly representable values (such as 1/8 and every partial sum in the second loop) returns exact results, while every addition of 0.1 is rounded and the error accumulates.
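
One quick way to check this in a JavaScript console is `toString(2)`, which prints the binary expansion the engine actually stores:

console.log((0.125).toString(2))     // "0.001" – 1/8 needs just one significant binary digit
console.log((0.1).toString(2))       // "0.000110011001100110011..." – 0.1 repeats forever in binary,
                                     // so the stored double is only the nearest representable value
console.log(0.125 + 0.125 === 0.25)  // true  – exact operands, exact result
console.log(0.1 + 0.2 === 0.3)       // false – both operands and the result are rounded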

That said, it is safest to assume that no floating-point operation is entirely exact. Different languages and processors may give slightly different results, so don't count on the 1/8 summation landing exactly on 10 everywhere.
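
If you need a loop like this in practice, a common workaround (a sketch, not part of the original answer; `nearlyEqual` is just an illustrative name) is to count in integers and derive the float, or to compare with a tolerance rather than exact equality:

// Count in integers and derive the value: 100 steps of 0.1.
for (var step = 0; step < 100; step++) {
    var x = step * 0.1; // still inexact, but the rounding error no longer accumulates
    // ... use x ...
}

// Or compare with a small tolerance instead of relying on exact results.
function nearlyEqual(a, b, eps) {
    return Math.abs(a - b) < (eps || 1e-9);
}
console.log(nearlyEqual(0.1 + 0.2, 0.3)) // true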

cxw