7

When a BigDecimal is constructed from a double versus from a String, different results appear.

BigDecimal a = new BigDecimal(0.333333333);
BigDecimal b = new BigDecimal(0.666666666);

BigDecimal c = new BigDecimal("0.333333333");
BigDecimal d = new BigDecimal("0.666666666");

BigDecimal x = a.multiply(b);
BigDecimal y = c.multiply(d);

System.out.println(x);
System.out.println(y);

x outputs as

0.222222221777777790569747304508155316795087227497352441864147715340493949298661391367204487323760986328125

while y is

0.222222221777777778

Am I wrong in saying that this is because of double imprecision? But since this is a BigDecimal, shouldn't it be the same?

dardeshna
  • have a look at a,b and c,d ... these are already different ;) have a look at http://stackoverflow.com/questions/12395281/convert-double-to-bigdecimal-and-set-bigdecimal-precision – pL4Gu33 Apr 14 '15 at 16:21
  • 1
    This is because `double` has limited precision; That's IEEE 754 for you. – fge Apr 14 '15 at 16:22
  • 1
    you can correct it by using `.setScale(10, BigDecimal.ROUND_HALF_UP);` – SnakeDoc Apr 14 '15 at 16:24

4 Answers

13

Am I wrong in saying that this is because of double imprecision?

You are absolutely right: this is exactly because of double's imprecision.

But since this is a BigDecimal, shouldn't it be the same?

No, it shouldn't. The error is introduced the moment you create new BigDecimal(0.333333333), because 0.333333333 constant already has an error embedded in it. At that point there is nothing you can do to fix this representation error: the proverbial horse is out of the barn by then, so it's too late to close the doors.

When you pass a String, on the other hand, the decimal representation matches the string exactly, so you get a different result.
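A short sketch (not from the answer itself) showing that the difference exists before any arithmetic happens, at the moment each BigDecimal is constructed:

```java
import java.math.BigDecimal;

public class DoubleVsString {
    public static void main(String[] args) {
        // The double literal is converted to binary floating point first,
        // so the BigDecimal captures the nearest representable double,
        // not 0.333333333 exactly.
        BigDecimal fromDouble = new BigDecimal(0.333333333);
        BigDecimal fromString = new BigDecimal("0.333333333");

        System.out.println(fromDouble); // a long string of digits near 0.333333333
        System.out.println(fromString); // 0.333333333

        // The two values already differ, before multiply() is ever called
        System.out.println(fromDouble.compareTo(fromString) != 0); // true
    }
}
```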

Sergey Kalinichenko
6

Yes, this is floating point error. The problem is that the literals 0.333333333 and 0.666666666 are represented as doubles before being passed as arguments to BigDecimal --- notably, one of BigDecimal's constructors takes a double as an argument.

This is supported by the Java Language Specification, which says that floating-point literals default to double unless otherwise specified (e.g. with an f or F suffix).
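A sketch illustrating the point: by the time the constructor runs, the literal is already a double, so constructing from the literal and from a double variable is equivalent. BigDecimal.valueOf, which goes through Double.toString, happens to recover the short form here:

```java
import java.math.BigDecimal;

public class LiteralDemo {
    public static void main(String[] args) {
        // The literal 0.333333333 is a double before the constructor sees it,
        // so these two BigDecimals are identical:
        double d = 0.333333333;
        BigDecimal fromLiteral = new BigDecimal(0.333333333);
        BigDecimal fromVariable = new BigDecimal(d);
        System.out.println(fromLiteral.equals(fromVariable)); // true

        // Double.toString prints the shortest decimal that round-trips to the
        // same double, so valueOf recovers the original literal text:
        System.out.println(BigDecimal.valueOf(d)); // 0.333333333
    }
}
```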

Patrick Collins
  • It is not a floating point _error_. The floating point code is behaving exactly the way it is intended to behave. The only problem is with the programmer's _expectation_ of how it will behave. The "problem" is that `0.333333333` does not represent the same rational number that is represented by `new BigDecimal("0.333333333")`. But, that is not a bug, and it can't be "fixed" except by changing the Java Language Specification such that `0.333333333` is no longer interpreted as an IEEE binary double. – Solomon Slow Apr 14 '15 at 16:33
  • @jameslarge I'm aware of how IEEE binary doubles work --- the imprecision that results from using them is typically referred to as "error." See, for example, [wikipedia](http://en.wikipedia.org/wiki/Floating_point#Machine_precision_and_backward_error_analysis), or the [Oracle docs](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html). This is error in the sense of [statistical error](http://en.wikipedia.org/wiki/Errors_and_residuals_in_statistics), not `FileNotFoundError`. – Patrick Collins Apr 14 '15 at 20:57
  • Oh,... right... _that_ kind of error. I guess I've been kind of preoccupied with software defects lately (please, don't ask me why), and I'm starting to see them even where they are not. – Solomon Slow Apr 14 '15 at 21:23
5

The Javadoc has the answer. According to the Javadoc for BigDecimal(double val):

The results of this constructor can be somewhat unpredictable. One might assume that writing new BigDecimal(0.1) in Java creates a BigDecimal which is exactly equal to 0.1 (an unscaled value of 1, with a scale of 1), but it is actually equal to 0.1000000000000000055511151231257827021181583404541015625. This is because 0.1 cannot be represented exactly as a double.
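The example from the quoted Javadoc can be reproduced directly:

```java
import java.math.BigDecimal;

public class PointOneDemo {
    public static void main(String[] args) {
        // Exactly the case described in the BigDecimal(double) Javadoc:
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625

        System.out.println(new BigDecimal("0.1")); // prints 0.1
    }
}
```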

Masudul
0

When you define a double value, in most cases it won't hold exactly the value you wrote, but the closest possible binary representation. You are passing a double to the constructor, so that small imprecision is already present.