Here is the code:

    import java.math.BigDecimal;
    import java.math.RoundingMode;
    import java.util.Arrays;

    public static void main(String[] args) {
        final double d1 = 811.44;
        final double d2 = 425.53;
        final double d3 = 384.27;

        for (double d : Arrays.asList(d1, d2, d3)) {
            final String dstr = String.format("%f", d);
            // setScale with RoundingMode.DOWN truncates to two decimal places
            BigDecimal bg1 = BigDecimal.valueOf(d).setScale(2, RoundingMode.DOWN);
            BigDecimal bg2 = new BigDecimal(dstr).setScale(2, RoundingMode.DOWN);
            BigDecimal bg3 = new BigDecimal(d).setScale(2, RoundingMode.DOWN);
            System.out.printf("[%s : %f] {%f, %f} %f%n", dstr, d, bg1, bg2, bg3);
        }
    }

Here is the output:

[811.440000 : 811.440000] {811.440000, 811.440000} 811.440000
[425.530000 : 425.530000] {425.530000, 425.530000} 425.520000
[384.270000 : 384.270000] {384.270000, 384.270000} 384.260000

Why don't we change the valueOf(double) method or the BigDecimal(double) constructor of the BigDecimal class so that they give consistent results?
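
To see where the truncation comes from, here is a minimal sketch (the class name is illustrative) that prints the exact binary value each of these doubles actually stores:

    import java.math.BigDecimal;

    public class ExactDoubleValue {
        public static void main(String[] args) {
            // new BigDecimal(double) exposes the exact binary value held by the double.
            // Per the output above, 425.53 and 384.27 are stored slightly below their
            // decimal literals, so rounding DOWN to two decimals truncates them to
            // 425.52 and 384.26, while 811.44 is stored slightly above and survives.
            for (double d : new double[] {811.44, 425.53, 384.27}) {
                System.out.println(d + " is exactly " + new BigDecimal(d));
            }
        }
    }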

2 Answers

Slightly change the code:

    import java.math.BigDecimal;
    import java.util.Arrays;

    public static void main(String[] args) {
        final double d1 = Double.parseDouble("811.44");
        final double d2 = Double.parseDouble("425.53");
        final double d3 = Double.parseDouble("384.27");
        final double DD = Double.parseDouble("999999999999999.95"); // 15 9s before the decimal point

        // DD already exceeds double's precision: the nearest double is exactly 1e15
        System.out.printf("%.15f%n", DD);

        for (double d : Arrays.asList(d1, d2, d3)) {
            final String dstr = String.valueOf(d);
            BigDecimal bg1 = BigDecimal.valueOf(d);
            BigDecimal bg2 = new BigDecimal(dstr);
            BigDecimal bg3 = new BigDecimal(d);
            System.out.printf("* [%s : %s : %.15f] {%.15f, %.15f, %.15f}%n", dstr, d, d, bg1, bg2, bg3);
            System.out.printf("  [%s : %s : %.15f] {%.15f, %.15f, %.15f}%n", dstr, d, d, bg1.doubleValue(), bg2.doubleValue(), bg3.doubleValue());
        }
    }

And this is the result:

1000000000000000.000000000000000
* [811.44 : 811.44 : 811.440000000000000] {811.440000000000000, 811.440000000000000, 811.440000000000055}
  [811.44 : 811.44 : 811.440000000000000] {811.440000000000000, 811.440000000000000, 811.440000000000000}
* [425.53 : 425.53 : 425.530000000000000] {425.530000000000000, 425.530000000000000, 425.529999999999973}
  [425.53 : 425.53 : 425.530000000000000] {425.530000000000000, 425.530000000000000, 425.530000000000000}
* [384.27 : 384.27 : 384.270000000000000] {384.270000000000000, 384.270000000000000, 384.269999999999982}
  [384.27 : 384.27 : 384.270000000000000] {384.270000000000000, 384.270000000000000, 384.270000000000000}

From the output, it seems the loss of precision is introduced by the BigDecimal(double) constructor: it preserves the exact binary value of the double, while BigDecimal.valueOf(double) and new BigDecimal(String) go through the decimal string representation.
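
This matches the documentation: BigDecimal.valueOf(double) is specified to use the double's canonical string representation from Double.toString(double), while the BigDecimal(double) constructor preserves the exact binary value. A small sketch (class name is illustrative) showing the two paths side by side:

    import java.math.BigDecimal;

    public class ValueOfVsConstructor {
        public static void main(String[] args) {
            double d = 425.53;
            // valueOf(double) routes through Double.toString(d), the shortest decimal
            // string that round-trips back to the same double, so it prints 425.53 ...
            System.out.println(BigDecimal.valueOf(d));
            System.out.println(new BigDecimal(Double.toString(d))); // equivalent per the Javadoc
            // ... while the double constructor keeps the exact binary value,
            // which is slightly below 425.53.
            System.out.println(new BigDecimal(d));
        }
    }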

The problem here isn't that new BigDecimal(double) or new BigDecimal(String) works incorrectly.

The problem is that doubles are not precise. They store their value in binary floating point, which cannot represent every decimal number exactly.
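
For example, 0.1 and 0.2 have no finite binary expansion, so the rounding error becomes visible as soon as the values are combined (class name is illustrative):

    public class BinaryFloatDemo {
        public static void main(String[] args) {
            // Neither 0.1 nor 0.2 is representable exactly in binary, so their
            // sum is not the double closest to 0.3.
            System.out.println(0.1 + 0.2);        // prints 0.30000000000000004
            System.out.println(0.1 + 0.2 == 0.3); // prints false
        }
    }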

Here are some links about that topic:
