for (var i = 0; i < 10; i += .1) {
}
console.log(i); // 10.09999999999998

BUT ...

for (var i = 0; i < 10; i += 1/8) {
}
console.log(i); // 10

Why is the result an integer when incrementing by 1/8?
Because 1/8 can be represented exactly as a base-2 (binary) fraction, but 0.1 cannot. 1/8 is 2 to the power of -3, so it has a finite binary expansion (0.001), while 0.1 in binary is an infinitely repeating fraction that has to be rounded to fit the format. JavaScript numbers are IEEE 754 double-precision binary floats, so values built from small powers of two, like 1/8, are stored and added exactly, whereas every addition of the rounded 0.1 approximation accumulates a little more error. That is also why the first loop overshoots: after 100 additions the running total is slightly below 10, the condition i < 10 is still true, and one extra iteration pushes i to roughly 10.1.
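You can see this directly by asking JavaScript to print the literals with more digits than it normally shows (the outputs in the comments are what an IEEE 754 double-precision engine should produce):

// 0.1 is silently rounded to the nearest representable double;
// 0.125 (= 1/8 = 2^-3) is stored exactly.
console.log((0.1).toPrecision(20));   // 0.10000000000000000555
console.log((0.125).toPrecision(20)); // 0.12500000000000000000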
That said, it is safest to assume that no floating-point operation will be entirely exact. Different languages and processors may give different results, so don't count on the 1/8 summation behaving identically everywhere.
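For loop counters and comparisons, a common defensive pattern is to iterate over integers and derive the float from the integer, or to compare against a tolerance instead of using exact equality. A minimal sketch (the helper name nearlyEqual and the tolerance are illustrative choices, not a standard API):

// Iterate over integers, so the counter itself never accumulates error:
for (var n = 0; n < 100; n += 1) {
  var x = n / 10; // derive the fractional value from an exact integer each pass
}

// Compare with a tolerance instead of ===:
function nearlyEqual(a, b, tolerance) {
  return Math.abs(a - b) <= (tolerance || Number.EPSILON * 16);
}
console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true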