When operations are performed on `BigDecimal`, the number of digits in the result is frequently larger than in either operand. This has two major effects:

- Unless code forces periodic rounding, operations on `BigDecimal` will get slower and slower as the numbers get longer and longer.
- No fixed-size container can possibly be big enough to accommodate a `BigDecimal`, since many operations between two values which filled up their respective containers would yield a result too long to fit into a container of that size.
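The digit growth is easy to observe: in Java, `multiply` produces a result whose scale is the sum of the operands' scales, so repeatedly squaring a value doubles the digit count on every step. A minimal sketch (the class name is arbitrary):

```java
import java.math.BigDecimal;

public class DigitGrowth {
    // Square the value n times with no rounding requested; the scale
    // (digits after the decimal point) doubles on every multiply.
    static int scaleAfterSquaring(BigDecimal start, int times) {
        BigDecimal x = start;
        for (int i = 0; i < times; i++) {
            x = x.multiply(x); // result scale = sum of operand scales
        }
        return x.scale();
    }

    public static void main(String[] args) {
        BigDecimal x = new BigDecimal("1.0000001"); // scale 7
        for (int i = 1; i <= 5; i++) {
            System.out.println("scale after " + i + " squarings: "
                    + scaleAfterSquaring(x, i));
        }
        // scales: 14, 28, 56, 112, 224 — exponential growth
    }
}
```

Supplying a `MathContext` (e.g. `MathContext.DECIMAL64`) to each operation is the usual way to force the periodic rounding mentioned above.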
The fundamental reason that `float` and `double` can be fast, while `BigDecimal` cannot, is that they are defined to lop off as much precision as is necessary in any calculation so as to yield a result which will fit in the same size of container as the original operands. This enables them to use fixed-size containers, without succeeding operations becoming progressively slower.
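The trade-off is visible in a one-line example: `double` rounds every result back into its fixed 64-bit container (introducing a tiny error), while `BigDecimal` keeps every digit exactly unless rounding is requested. A small illustration:

```java
import java.math.BigDecimal;

public class FixedVsExact {
    public static void main(String[] args) {
        // double: result is rounded to the nearest representable 64-bit value
        double d = 0.1 + 0.2;

        // BigDecimal: the sum is kept exactly, with no size limit
        BigDecimal b = new BigDecimal("0.1").add(new BigDecimal("0.2"));

        System.out.println(d); // 0.30000000000000004
        System.out.println(b); // 0.3
    }
}
```

The `double` answer is slightly off but always fits in 64 bits; the `BigDecimal` answer is exact but may grow without bound.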
Incidentally, another major (though less fundamental) reason that `BigDecimal` is slow is that values are represented using a binary-formatted mantissa but a decimal exponent. Consequently, any operation which would require adjusting the precision of its operands must be preceded by a very expensive "normalization" step. The type might be easier to work with if any given value had exactly one representation, so that adding 123.456 to 0.044 yielded 123.5 rather than 123.500; but normalizing 123.500 to 123.5 would require much more computation than the addition of 123.456 and 0.044 itself. Further, if that result is then added to another number with three significant figures after the decimal point, the normalization performed after the earlier addition would increase the time required to perform the next one.
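Java's `BigDecimal` in fact takes the opposite design: values are never normalized implicitly, so 123.500 and 123.5 are distinct representations (equal by `compareTo` but not by `equals`), and normalization happens only when `stripTrailingZeros()` is called explicitly. A short sketch of the 123.456 + 0.044 case from the paragraph above:

```java
import java.math.BigDecimal;

public class TrailingZeros {
    public static void main(String[] args) {
        // add() keeps the larger scale of its operands: scale 3 here
        BigDecimal sum = new BigDecimal("123.456").add(new BigDecimal("0.044"));

        System.out.println(sum);                      // 123.500 (scale 3)
        System.out.println(sum.stripTrailingZeros()); // 123.5   (scale 1)
    }
}
```

Keeping the un-normalized result means a follow-up addition of another three-decimal value needs no precision adjustment at all, which is exactly the cost the paragraph above describes avoiding.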
is slow is that values are represented using a binary-formatted mantissa but a decimal exponent. Consequently, any operations which would require adjusting the precision of their operands must be preceded by a very expensive "normalization" step. The type might be easier to work with if any given value had exactly one representation, and thus adding 123.456 to 0.044 yielded 123.5 rather than 123.500, but normalizing 123.500 to 123.5 would require much more computation than adding of 123.456 and 0.44; further, if that result is added to another number with three significant figures after the decimal point, the normalization performed after the earlier addition would increase the time required to perform the next one.