Unfortunately, in these cases one has to use BigDecimal instead of double.
Doubles are a sum of (negative) powers of 2 (bits!), which means a decimal value like 0.10 can only be approximated: 1/16 + 1/32 + ... . During calculations the deviation accumulates and becomes visible.
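A quick illustration (not part of the original answer, just a snippet to make the drift visible):

double sum = 0.0;
for (int i = 0; i < 10; i++) {
    sum += 0.10;                   // add ten cents, ten times
}
System.out.println(sum);           // prints 0.9999999999999999, not 1.0
System.out.println(0.1 + 0.2);     // prints 0.30000000000000004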
To use doubles, one would constantly have to format the output with DecimalFormat (which is a good thing anyway for locales that use a decimal comma and for thousands separators). But one would also constantly have to round intermediate results, and tax and other laws sometimes demand rounding to 6 decimals and so on.
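For illustration, a locale-aware format might look like this (the German locale is just an example assumption):

import java.math.RoundingMode;
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

// Decimal comma and dot as thousands separator, as used in e.g. Germany.
DecimalFormat df = new DecimalFormat("#,##0.00",
        DecimalFormatSymbols.getInstance(Locale.GERMANY));
df.setRoundingMode(RoundingMode.HALF_UP);  // control rounding explicitly (default is HALF_EVEN)
System.out.println(df.format(1234.5));     // prints 1.234,50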
BigDecimal is an immutable value type. It needs a scale/precision to be set, and it has somewhat awkward add, subtract and multiply methods.
new BigDecimal("0.10"); // Works perfectly.
new BigDecimal(0.10); // Should not be used, looses precision 2,
// and has epsilon error.
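A sketch of how such a calculation typically looks (the 19 % rate and the 2-decimal scale are just example assumptions):

import java.math.BigDecimal;
import java.math.RoundingMode;

BigDecimal net  = new BigDecimal("19.99");
BigDecimal rate = new BigDecimal("0.19");               // example tax rate

BigDecimal tax   = net.multiply(rate)
                      .setScale(2, RoundingMode.HALF_UP); // 3.80
BigDecimal gross = net.add(tax);                          // 23.79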
And then there is the database side: depending on the database system, DECIMAL might be preferred over DOUBLE.
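On the Java side a DECIMAL column maps directly to BigDecimal via JDBC; a rough sketch (table and column names are made up for the example):

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical table: invoice(id BIGINT, amount DECIMAL(19,2))
static BigDecimal loadAmount(Connection connection, long invoiceId) throws SQLException {
    try (PreparedStatement ps = connection.prepareStatement(
            "SELECT amount FROM invoice WHERE id = ?")) {
        ps.setLong(1, invoiceId);
        try (ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getBigDecimal("amount") : null; // exact value, no binary rounding
        }
    }
}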
(The third alternative would be to store amounts in cents as a long, but that is cumbersome too.)
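For completeness, the cents-as-long approach might look roughly like this (the rounding and formatting details are exactly where it gets cumbersome):

long priceCents = 1999;                           // 19.99 stored as 1999 cents
long taxCents   = Math.round(priceCents * 0.19);  // careful: the rate drags a double back in
long totalCents = priceCents + taxCents;

// Formatting has to split euros and cents by hand.
System.out.printf("%d.%02d%n", totalCents / 100, totalCents % 100); // 23.79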