
Why is the output different when adding the same numbers?

public class Test {

    public static void main(String[] args) {
        double[] x = {3.9, 4.3, 3.6, 1.3, 2.6};
        System.out.println(">>>>>>> " + sum(x));
    }

    public static double sum(double[] d) {
        double sum = 0;
        for (int i = 0; i < d.length; i++) {
            sum += d[i];
        }
        return sum;
    }
}

The output is: 15.7

and if I reorder the values:

double[] x = new double[] {2.6, 3.9, 4.3, 3.6, 1.3};

I get the output: 15.700000000000001

How do I get the same output?

LMK

  • No offense but why so many upvotes? Just another floating-point question. The answer is the same every time. – Radiodef Jan 27 '14 at 05:57

4 Answers


Floating-point numbers lose precision as you do more operations. Generally, you get the highest precision by adding the smallest numbers first. (So the result does depend on the order of operations.)
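For instance, here is a minimal sketch of "smallest first" (the class and method names are my own): copy the array, sort it ascending, then sum.

import java.util.Arrays;

public class SortedSum {

    // Summing in ascending order tends to reduce accumulated rounding
    // error, because small values are combined before they meet a large
    // running sum. For mixed signs you would sort by absolute magnitude.
    public static double sortedSum(double[] d) {
        double[] copy = d.clone(); // leave the caller's array untouched
        Arrays.sort(copy);         // ascending; matches magnitude order
                                   // when all values are non-negative
        double sum = 0;
        for (double v : copy) {
            sum += v;
        }
        return sum;
    }
}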

In addition to maintaining the same order of operations, you'll also have to use strictfp to get the same result on different platforms.
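A minimal sketch of that (the class name is my own); marking the class strictfp pins every floating-point operation in it to strict IEEE 754 double semantics:

public strictfp class StrictSum {

    // With strictfp, intermediates may not be kept in extended precision,
    // so the bit-for-bit result is reproducible across JVMs and hardware.
    public static double sum(double[] d) {
        double sum = 0;
        for (double v : d) {
            sum += v;
        }
        return sum;
    }
}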

Or better yet, don't use floating point at all: use a BigDecimal instead.
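A sketch of the BigDecimal approach (class and variable names are my own). Note that the values are built from strings, because new BigDecimal(3.9) would inherit the double's representation error:

import java.math.BigDecimal;

public class BigDecimalSum {

    public static void main(String[] args) {
        String[] values = {"3.9", "4.3", "3.6", "1.3", "2.6"};
        BigDecimal sum = BigDecimal.ZERO;
        for (String v : values) {
            sum = sum.add(new BigDecimal(v)); // exact decimal arithmetic
        }
        System.out.println(sum); // 15.7, in any order
    }
}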

Navin
  • Can you explain why it loses precision when we do more operations, and why we get the highest precision by adding the smallest numbers first? – Keerthivasan Jan 27 '14 at 05:59
  • @Octopus The error accumulates. See: [Propagation Of Error](https://en.wikipedia.org/wiki/Propagation_of_uncertainty). (I guess I should stop linking Wikipedia, but I am a little lazy at the moment ;) – Navin Jan 27 '14 at 06:02
  • Think of the 10/3 example: the result is 3.333333..., but the computer needs to stop somewhere, so it stores the number only up to a given precision. Then you add 10/3 again, and those discarded digits accumulate into the "error" of your operation, and this goes on. – Leo Jan 27 '14 at 06:04
  • You don't have to go far: if you sum 0.1 nine times, the result will already show the error. That is also caused by the floating-point representation. – Leo Jan 27 '14 at 06:05
  • @Leo Now that's something I don't see every day :) – Navin Jan 27 '14 at 06:07
  • @Octopus Ah, now my rep is 666. – Navin Jan 27 '14 at 06:09
  • 1
    Daring Devil's number ;) Move higher – Keerthivasan Jan 27 '14 at 06:11
  • “Precision” refers to the fineness with which numbers are represented, and it does not change with more operations. “Accuracy” refers to the closeness of a computed value to the ideal value. It is accuracy, not precision, that tends to decrease with more operations. – Eric Postpischil Jan 27 '14 at 10:39
  • 1
    `BigDecimal` does not eliminate accuracy problems. It changes their nature, it provides increased precision if desired, and it decreases performance. – Eric Postpischil Jan 27 '14 at 10:40
  • 1
    @EricPostpischil I wouldn't try to assign the two words definitions like that. I've read texts that basically use them interchangeably and a few times seen two texts that use them oppositely. – Radiodef Jan 27 '14 at 11:50
  • 1
    @Radiodef: These are long-established meanings used by professionals. Using them interchangeably is sloppy and obscures meaning. The two concepts are quite different, and we need different words to distinguish them. – Eric Postpischil Jan 27 '14 at 11:56
  • @EricPostpischil Well, as the float gets larger, the "fineness with which numbers are represented" _does_ become smaller. I think that is what I meant to say when I said more operations reduce precision. (although OP's problem is probably caused by accumulated roundoff) – Navin Jan 27 '14 at 18:06

At each step in a sequence of floating point arithmetic operations, the system has to produce a result that is representable in the floating point format. That may lead to rounding error, the loss of some information.

When adding two numbers of different magnitudes, the larger one tends to control which bits have to be dropped. If you add a large and a small number, many bits of the small number will be lost to rounding error, because of the large magnitude of the result. That effect is reduced when adding numbers of similar magnitude. Adding several small numbers first, leaving the large magnitude numbers to the end, allows the effect of the small numbers to accumulate.

For example, consider { 1e17, 21.0, 21.0, 21.0, 21.0, 21.0, 21.0, 21.0, -1e17 }. The exact answer, without any rounding, would be 147. Adding in the order shown above gives 112. Each addition of a "21.0" has to be rounded to fit in a number with magnitude around 1e17. Adding in ascending order of absolute magnitude gives 144, much closer to the exact answer. The partial result of adding the 7 small numbers is exactly 147, which then has to be rounded to fit in a number around 1e17.
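A quick sketch that reproduces those numbers (class and variable names are my own):

public class OrderDemo {

    public static void main(String[] args) {
        double[] bigFirst   = {1e17, 21, 21, 21, 21, 21, 21, 21, -1e17};
        double[] smallFirst = {21, 21, 21, 21, 21, 21, 21, 1e17, -1e17};
        System.out.println(sum(bigFirst));   // 112.0: each 21 is rounded against 1e17
        System.out.println(sum(smallFirst)); // 144.0: the seven 21s reach 147 exactly first
    }

    static double sum(double[] d) {
        double sum = 0;
        for (double v : d) {
            sum += v;
        }
        return sum;
    }
}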

Patricia Shanahan

Simply adding up all the values will lead to a comparatively large error for longer arrays anyway (more precisely: the error becomes "large" once the running sum is already "large" and further "small" numbers are added).

As one way to reduce the numerical error, you might consider the Kahan summation algorithm (http://en.wikipedia.org/wiki/Kahan_summation_algorithm):

public static double kahanSum(double[] d)
{
    double sum = 0.0;
    double c = 0.0;              // running compensation for lost low-order bits
    for (int i = 0; i < d.length; i++)
    {
        double y = d[i] - c;     // subtract the previously lost bits from the next element
        double t = sum + y;      // sum is big, y is small: low-order bits of y are lost here
        c = (t - sum) - y;       // (t - sum) recovers y's high part; subtracting y isolates the lost part
        sum = t;
    }
    return sum;
}
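Hypothetical usage with the arrays from the question; with the compensation, both orderings should give the same correctly rounded result:

double[] a = {3.9, 4.3, 3.6, 1.3, 2.6};
double[] b = {2.6, 3.9, 4.3, 3.6, 1.3};
System.out.println(kahanSum(a)); // should print the same value...
System.out.println(kahanSum(b)); // ...for both orderings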
Marco13

Because doubles and other floating-point data types have to deal with rounding when you perform operations; their precision is not infinite. If you divide 10 by 3, the result is 3.33333333..., but the computer stores only part of that number.

Check http://floating-point-gui.de/
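For instance, this tiny sketch (the class name is my own) makes the error visible, since 0.1 has no exact binary representation:

public class RoundingDemo {

    public static void main(String[] args) {
        double s = 0.0;
        for (int i = 0; i < 9; i++) {
            s += 0.1; // each addition carries a small representation error
        }
        System.out.println(s); // prints 0.8999999999999999, not 0.9
    }
}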

Leo