
Suppose I have a sum as follows, where n is any positive integer.

let sum = 0;
for (let i = 0; i < n; ++i) {
  sum += (1/n);
}

This sum should always equal 1, but with JS floating point weirdness, it sometimes does not. Thus, I have been using Number.EPSILON when comparing sum to 1, as follows:

function sumIsEqualToOne(sum) {
  return Math.abs(1 - sum) < Number.EPSILON;
}

per the example on MDN (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/EPSILON).

This doesn't always work though, as for n = 7, 1 - sum === Number.EPSILON, so sumIsEqualToOne returns false.
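
For concreteness, here is the failing case with n = 7:

let sum = 0;
for (let i = 0; i < 7; ++i) {
  sum += (1/7);
}
console.log(sum);                        // 0.9999999999999998
console.log(1 - sum === Number.EPSILON); // true
console.log(sumIsEqualToOne(sum));       // false: `<` excludes exact equality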

Is my original assumption incorrect, and sum is not actually always equal to 1 here? Does (1/7)+(1/7)+(1/7)+(1/7)+(1/7)+(1/7)+(1/7) !== 1?

sir_thursday
  • Just use `Math.abs(1 - sum) <= Number.EPSILON;` –  Feb 18 '19 at 16:26
  • 1
    you are adding the error. in this case it is 7 * error to check against. – Nina Scholz Feb 18 '19 at 16:27
  • 1
    @nina In my eyes that would be the actual answer to this question, the duplicate is a bit too broad – Jonas Wilms Feb 18 '19 at 16:32
  • What do you mean "you are adding the error"? – sir_thursday Feb 18 '19 at 16:34
  • if you look at the first result of `1 / n` and assume the error is the max possible error, then by multiplying or adding the same value, the error of the sum is the product of the first error and the factor. `Number.EPSILON` is only the error of one operation. – Nina Scholz Feb 18 '19 at 16:39
  • If you need this `sum += (1/n);` to always equal `1`, there is a way to do it using a mathematical correction factor. You will need to calculate a power of `10` that transforms the `1/n` decimal number into an integer, so the arithmetic operations are performed between integers. Then you finally divide your integer sum by this number again. I tried to write up an explanation but unfortunately the question was closed. For an idea of how to use it, you can check https://stackoverflow.com/questions/54433007/number-integer-or-decimal-to-array-array-to-number-integer-or-decimal-witho – Shidersz Feb 18 '19 at 16:53
  • @NinaScholz: While there is a rounding error in computing `1/n`, and this rounding error participates in the sum n times, that does not make the error in the sum n times that rounding error. There are n-1 additional rounding errors in the n-1 additions, each of which may make the accumulated error greater or lesser. – Eric Postpischil Feb 18 '19 at 18:30
  • @Shidersz: No power of ten will, when multiplied by 1/7, convert 1/7 to an integer. – Eric Postpischil Feb 18 '19 at 18:31
  • What is the real problem you are trying to solve here? You have given us an example that is trivial: To correctly compute the sum of `1/n` added `n` times, simply return 1. Presumably there is some other calculation you are trying to perform. The art and science of analyzing and reducing errors in numerical computations is extensive. There is no general way to compute something without error or with arbitrarily limited error. If you explain what you really need to compute, it might be possible to reopen the question and give some recommendations or to refer to useful related questions. – Eric Postpischil Feb 18 '19 at 18:34
  • @EricPostpischil if you take a calculated value with a given error and add this value n times, you add that error repeatedly, along with the floating-point error of each addition; the result is an error which is greater than the floating-point error of a single operation. – Nina Scholz Feb 18 '19 at 18:35
  • @NinaScholz: Sometimes one error is in a positive direction and the other error is in a negative direction, with the result that they partially or completely cancel. The final error may be less than any individual component errors. – Eric Postpischil Feb 18 '19 at 18:37
  • in the above case, you have the same error seven times in the same direction. – Nina Scholz Feb 18 '19 at 18:39
  • @EricPostpischil Not theoretically, but in practice you can do it; check this code: `let sum = 0; for (let i = 0; i < 7; i++) {sum += (1/7 * Math.pow(10, 17));} console.log(sum / Math.pow(10, 17));` – Shidersz Feb 18 '19 at 18:43
  • @NinaScholz: Let x be the computed result of `1/7`. When we start the sum with `s = x + x`, there is no error, as the result is 2x, which is exactly representable in binary floating-point. When we continue with `s += x`, the sum will not necessarily be exactly 3x, because 3x may not be representable in the floating-point format. If the low bit of x is set (the bit in the least significant position of the significand is 1), then x has 53 significant bits, and 2x has 53 significant bits, and their sum, 3x, has 54 significant bits, which does not fit in the 53-bit significand… – Eric Postpischil Feb 18 '19 at 18:44
  • … In that case, it must be rounded, so computing `s += x` does not produce exactly 3x: it is subject to another error. As we go on, adding x over and over again, we accumulate additional rounding errors (see the second sketch after these comments). – Eric Postpischil Feb 18 '19 at 18:45
  • @Shidersz: (a) That will not fix the rounding error that occurs in calculating 1/7 in the first place. (b) Although that will produce integers, the integers will be so large that they cannot be added exactly in the floating-point format either, so there will still be rounding errors in doing arithmetic with them. – Eric Postpischil Feb 18 '19 at 18:46
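
Putting Nina Scholz's suggestion into code, here is a minimal sketch of a scaled tolerance. The function name and the extra n parameter are illustrative, and n * Number.EPSILON is a heuristic bound rather than a rigorous one (as Eric Postpischil's comments note, individual errors may also partially cancel):

function sumIsCloseToOne(sum, n) {
  // Each operation (the division and the n-1 additions) can contribute a
  // rounding error on the order of Number.EPSILON for sums near 1, so scale
  // the tolerance by n instead of comparing against a single epsilon.
  return Math.abs(1 - sum) <= n * Number.EPSILON;
}

// For n = 7: |1 - sum| === Number.EPSILON <= 7 * Number.EPSILON, so this returns true.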
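And a second sketch illustrating Eric Postpischil's point that repeated addition rounds at every step, while a single multiplication rounds only once (assuming IEEE-754 doubles, which JS numbers are):

let x = 1 / 7;    // x is the double closest to 1/7, not 1/7 exactly
let s = x + x;    // exact: doubling only changes the exponent
for (let i = 2; i < 7; ++i) {
  s += x;         // each of these additions may round again
}
console.log(s);           // 0.9999999999999998
console.log(7 * x);       // 1: a single multiplication, a single rounding
console.log(s === 7 * x); // false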

0 Answers