I always hear people say that the best way to represent money/currency is to use a high enough precision floating point type like double.
I don't quite get it. double is just a floating point type that uses 52 bits for the mantissa and 11 bits for the exponent (plus 1 sign bit).
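To make that limit concrete, here is a quick check (Java assumed, since the snippet below looks like Java): the 52 stored mantissa bits plus the implicit leading bit give 53 significant bits, so not every integer above 2^53 is exactly representable.

public class DoubleLimit {
    public static void main(String[] args) {
        double limit = 9007199254740992.0;       // 2^53, the last point where every integer is exact
        System.out.println(limit == limit + 1);  // prints true: 2^53 + 1 rounds back to 2^53
        System.out.println(Math.ulp(limit));     // prints 2.0: adjacent doubles here are 2 apart
    }
}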
I know double is better than float, but if we use double to represent money in financial applications, aren't there going to be serious consequences? Imagine this:
double d = ... // d is a very large number representing a very rich person's balance
double sum = d + 1; // the person deposits another $1 into the account
Since $1 is tiny relative to d, it gets rounded away, so technically d ends up with the same value as sum. Isn't that a serious consequence: someone makes a deposit, and the balance stays the same?
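Here is a small runnable sketch of that scenario (Java assumed, with a hypothetical balance of 1e17 standing in for "very rich"):

public class DepositDemo {
    public static void main(String[] args) {
        double d = 1e17;        // hypothetical very large balance
        double sum = d + 1;     // deposit another $1
        System.out.println(d == sum);    // prints true: the $1 is rounded away
        System.out.println(Math.ulp(d)); // prints 16.0: spacing between adjacent doubles near d
    }
}

At that magnitude the representable doubles are 16 apart, so an added $1 cannot survive the rounding.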