I've implemented a fixed Decimal class, but I have an overflow problem caused by precision propagating through division.
At a high level, the decimal is represented by:

template<class MANTISSA>
struct Decimal
{
    // Example: 7.14
    MANTISSA mantissa_;  // 714
    uint8_t  exponent_;  // 2  (i.e. value = mantissa_ / 10^exponent_)
};
However, there is a fundamental design flaw: my divide operator keeps calculating the remainder until it is zero or we run out of digits, and the latter happens more frequently than I realized.
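For reference, the divide strategy looks roughly like this (a minimal sketch, not my real code: it assumes a concrete int64_t mantissa, positive values, and lhs.exponent_ >= rhs.exponent_; Dec64 and divide are illustrative names):

#include <cstdint>

struct Dec64
{
    int64_t mantissa_;
    uint8_t exponent_;   // value = mantissa_ / 10^exponent_
};

Dec64 divide(const Dec64& lhs, const Dec64& rhs)
{
    // Integer part of the quotient, plus the exponent bookkeeping.
    Dec64 out{ lhs.mantissa_ / rhs.mantissa_,
               static_cast<uint8_t>(lhs.exponent_ - rhs.exponent_) };
    int64_t rem = lhs.mantissa_ % rhs.mantissa_;

    // Keep producing fractional digits until the remainder is zero or the
    // mantissa is about to run out of room (~18 digits for int64_t).
    // (Glosses over the corner case where rem * 10 itself overflows.)
    while (rem != 0 && out.mantissa_ < INT64_MAX / 10)
    {
        rem *= 10;
        out.mantissa_ = out.mantissa_ * 10 + rem / rhs.mantissa_;
        rem %= rhs.mantissa_;
        ++out.exponent_;   // one more fractional digit
    }
    return out;
}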
I then multiply the output of divide by another decimal:
Decimal operator*(const Decimal& rhs) const
{
    Decimal result = *this;
    result.mantissa_ *= rhs.mantissa_;
    result.exponent_ += rhs.exponent_;
    return result;
}
There will most likely be an overflow here, because the mantissa coming out of divide already has the maximum number of digits (18 for int64_t).
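For instance (illustrative numbers): 2 / 3 comes out of divide as mantissa 666666666666666666 (18 digits) with exponent 18; multiplying that by 1.4 (mantissa 14, exponent 1) requires 666666666666666666 * 14 ≈ 9.3e18, which is past INT64_MAX ≈ 9.2e18, so the signed multiply overflows.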
To solve this, I thought I could detect the impending overflow and reduce the multiplicands, so I changed operator*() to check whether the exponents sum to more than 18. However, this check is wrong: the exponent sum can be well under 18 while the mantissas are still large enough to overflow.
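For example, 999999999.9 * 99999999.99: the exponents sum to only 1 + 2 = 3, yet the mantissa product 9999999999 * 9999999999 is roughly 1e20, far beyond what int64_t can hold.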
To solve this properly I would need to count the number of mantissa digits before multiplying, and that requires a while loop: divide the mantissa by 10 until it reaches zero, and do this for both multiplicands. That seems very expensive. Surely this cannot be the best approach?
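Concretely, the check I have in mind looks something like this (a sketch; count_digits and multiply_may_overflow are illustrative names, and the test is deliberately conservative, since a 19-digit product can sometimes still fit in int64_t):

#include <cstdint>

// Count decimal digits by repeated division by 10 -- the loop described above.
int count_digits(int64_t m)
{
    int digits = 0;
    do { m /= 10; ++digits; } while (m != 0);
    return digits;
}

// Pre-check before multiplying two mantissas: if a has d_a digits and b has
// d_b digits, then a * b < 10^(d_a + d_b), so a digit sum of at most 18 is
// guaranteed to fit in int64_t.
bool multiply_may_overflow(int64_t a, int64_t b)
{
    return count_digits(a) + count_digits(b) > 18;
}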
At this point I have taken a step back, because things seem more complicated than they should be. How should I handle this? Should I fix the problem at the source and not let divide produce so many digits? If so, should I require the user to configure how many digits divide outputs? Should the precision of the divide output depend on the magnitude/exponent of the two input arguments?
Can anyone advise?