First of all, using `double` for monetary amounts is risky.
TL;DR
I'd recommend staying below $17,592,186,044,416.
The floating-point representation of numbers (the `double` type) doesn't use decimal fractions (1/10, 1/100, 1/1000, ...), but binary ones (e.g. 1/128, 1/256). So, a `double` number will never exactly hit something like $1.99. It will be off by some fraction most of the time.
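You can see the exact binary value a `double` actually stores via the `BigDecimal(double)` constructor, which converts without any decimal rounding (a minimal sketch):

```java
import java.math.BigDecimal;

public class ExactDoubleValue {
    public static void main(String[] args) {
        // The BigDecimal(double) constructor shows the exact binary value
        // stored in the double, not the rounded decimal string "1.99".
        System.out.println(new BigDecimal(1.99));
        // prints a long decimal expansion close to, but not exactly, 1.99

        // So the stored value differs from the exact decimal 1.99:
        System.out.println(new BigDecimal(1.99)
                .compareTo(new BigDecimal("1.99")) != 0); // true
    }
}
```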
Hopefully, the conversion from decimal digit input ("1.99") to a `double` number will end up with the closest binary approximation, a tiny fraction higher or lower than the exact decimal value.
To be able to correctly represent the 100 different cent values from $xxx.00 to $xxx.99, you need a binary resolution where you can represent at least 128 different values for the fractional part, meaning that the least significant bit corresponds to 1/128 (or better), meaning that at least 7 trailing bits have to be dedicated to the fractional dollars.
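The 1/128 threshold can be checked directly (a trivial sketch):

```java
public class CentStep {
    public static void main(String[] args) {
        double sevenBits = 1.0 / 128.0; // 2^-7: least significant of 7 fraction bits
        double sixBits   = 1.0 / 64.0;  // 2^-6: what 6 fraction bits would give you

        System.out.println(sevenBits <= 0.01); // true: fine enough to separate cents
        System.out.println(sixBits   <= 0.01); // false: adjacent cents would collide
    }
}
```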
The `double` format effectively has 53 bits for the mantissa. If you need 7 bits for the fraction, you can devote at most 46 bits to the integral part, meaning that you have to stay below 2^46 dollars ($70,368,744,177,664.00, about 70 trillion) as the absolute limit.
As a precaution, I wouldn't trust the best-rounding property of converting from decimal digits to `double` too much, so I'd spend two more bits for the fractional part, resulting in a limit of 2^44 dollars, $17,592,186,044,416.
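You can watch the resolution degrade with `Math.ulp()`, which gives the spacing between adjacent `double` values at a given magnitude (a sketch; the sample amounts are arbitrary):

```java
public class CentResolution {
    public static void main(String[] args) {
        double safe = 1.0e13;    // below the cautious 2^44-dollar limit
        double broken = 1.5e14;  // above the hard 2^46-dollar limit

        // spacing between adjacent representable doubles at each magnitude
        System.out.println(Math.ulp(safe));   // 0.001953125 (2^-9, finer than a cent)
        System.out.println(Math.ulp(broken)); // 0.03125     (2^-5, coarser than a cent)

        // adding one cent still changes the value below the limit ...
        System.out.println(safe + 0.01 != safe);     // true
        // ... but is silently lost above it
        System.out.println(broken + 0.01 == broken); // true
    }
}
```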
Code Warning
There's a flaw in your code:

    return (long) (amountUsd * 100.0);

This will truncate down to the next-lower cent if the `double` value lies between two exact cents, meaning that e.g. "123456789.23" might become 123456789.229... as a `double` and get truncated down to 12345678922 cents as a `long`.
You'd better use

    return Math.round(amountUsd * 100.0);

This will end up with the nearest cent value, most probably the "correct" one.
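A quick side-by-side of the two variants (a sketch; the method names are made up for illustration):

```java
public class CentsConversion {
    // the flawed variant: the cast truncates toward zero
    static long toCentsTruncating(double amountUsd) {
        return (long) (amountUsd * 100.0);
    }

    // the fixed variant: round to the nearest cent
    static long toCentsRounded(double amountUsd) {
        return Math.round(amountUsd * 100.0);
    }

    public static void main(String[] args) {
        double amount = Double.parseDouble("123456789.23");
        // the truncating version may come out one cent short, depending on
        // which side of the exact decimal value the double landed on
        System.out.println(toCentsTruncating(amount));
        System.out.println(toCentsRounded(amount)); // 12345678923
    }
}
```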
EDIT: Remarks on "Precision"
You often read statements that floating-point numbers aren't precise, and then in the next sentence the authors advocate `BigDecimal` or similar representations as being precise.
The validity of such a statement depends on the type of number you want to represent.
All the number representation systems in use in today's computing are precise for some types of numbers and imprecise for others. Let's take a few example numbers from mathematics and see how well they fit into some typical data types:
- `42`: A small integer can be represented exactly in virtually all types.
- `1/3`: All the typical data types (including `double` and `BigDecimal`) fail to represent 1/3 exactly. They can only do a (more or less close) approximation. The result is that multiplying the approximation by 3 does not exactly give the integer 1. Few languages offer a "ratio" type, capable of representing numbers by numerator and denominator, thus giving exact results.
- `1/1024`: Because of the power-of-two denominator, `float` and `double` can easily do an exact representation. `BigDecimal` can do so as well, but needs 10 fractional digits.
- `14.99`: Because it's a decimal fraction (it can be rewritten as 1499/100), `BigDecimal` does it easily (that's what it's made for); `float` and `double` can only give an approximation.
- `PI`: I don't know of any language with support for irrational numbers - I have no idea how this could even be possible (aside from treating popular irrationals like PI and E symbolically).
- `123456789123456789123456789`: `BigInteger` and `BigDecimal` can do it exactly, `double` can do an approximation (with the last 13 digits or so being garbage), `int` and `long` fail completely.
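A few of these cases can be verified directly in Java (a sketch; the class name is made up for illustration):

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.RoundingMode;

public class Representability {
    public static void main(String[] args) {
        // 1/3: BigDecimal refuses an exact division, since the decimal
        // expansion never terminates ...
        try {
            new BigDecimal(1).divide(new BigDecimal(3));
        } catch (ArithmeticException e) {
            System.out.println("1/3 not exact in BigDecimal: " + e.getMessage());
        }
        // ... with an explicit scale and rounding mode it gives an approximation
        System.out.println(new BigDecimal(1)
                .divide(new BigDecimal(3), 10, RoundingMode.HALF_UP)); // 0.3333333333

        // 1/1024: a power-of-two denominator is exact in double
        System.out.println(1.0 / 1024.0 == 0.0009765625); // true

        // 14.99: exact in BigDecimal, only approximated by double
        System.out.println(new BigDecimal("14.99")); // exactly 14.99
        System.out.println(new BigDecimal(14.99)
                .compareTo(new BigDecimal("14.99")) != 0); // true: double is off

        // 27 digits: exact in BigInteger, far out of range for long
        BigInteger big = new BigInteger("123456789123456789123456789");
        System.out.println(big.multiply(big)); // still exact
    }
}
```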
Let's face it: Each data type has a class of numbers that it can represent exactly, where computations deliver precise results, and other classes where it can at best deliver approximations.
So the questions should be:
- What's the type and range of numbers to be represented here?
- Is an approximation okay, and if yes, how close should it be?
- What's the data type that matches my requirements?