Why don't applications typically use an integer datatype (such as `int` or `long` in C++/Java/C#) to represent currency values internally, as opposed to using a floating-point datatype (`float`, `double`) or something like Java's `BigDecimal`?
For example, if I'm writing a Java application and I have a variable that I want to represent an actual value in U.S. dollars (with no need to represent fractions of pennies), I could declare an `int` value that represents the number of cents. A value of "$1.00", for instance, would be stored as 100. This seems like a good alternative to using a `double` (see the question Why not use Double or Float to represent currency?) or a `BigDecimal` (which is a more heavyweight object than a simple primitive `int`).
Obviously, the integer value would need to be "translated" (i.e., from 100 to "$1" or "$1.00") before displaying it to a user, or upon user input of a currency value, but doing this doesn't seem significantly more burdensome than formatting a `double` or a `BigDecimal` for display.
Why isn't this approach a best practice among applications that don't need to represent fractions of cents (or the equivalent in other currency types)?