
Why don't applications typically use an integer datatype (such as int or long in C++/Java/C#) to represent currency values internally, as opposed to using a floating-point datatype (float, double) or something like Java's BigDecimal?

For example, if I'm writing a Java application and I have a variable that I want to represent an actual value in U.S. dollars (no need to represent fractions of pennies), I could declare an int value that represents the number of cents. For example, a value of "$1.00" would be represented as 100. This seems like a good alternative to using a double (see question Why not use Double or Float to represent currency?) or a BigDecimal (which is a more heavyweight object than a simple primitive int).

Obviously, the integer value would need to be "translated" (i.e. from 100 to "$1" or "$1.00") before displaying it to a user, or upon user input of a currency value, but doing this doesn't seem significantly more burdensome than formatting a double or a BigDecimal for display.
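The translation described above can be sketched in a few lines. The class and method names here are illustrative, not from any standard library:

```java
// Minimal sketch: converting between integer cents and a display string.
public class CentsFormat {
    // 110 -> "$1.10"
    static String toDisplay(long cents) {
        return String.format("$%d.%02d", cents / 100, Math.abs(cents % 100));
    }

    // "$1.10" -> 110 (assumes a well-formed "$d.cc" input)
    static long fromDisplay(String s) {
        return Long.parseLong(s.replace("$", "").replace(".", ""));
    }

    public static void main(String[] args) {
        System.out.println(toDisplay(100));       // $1.00
        System.out.println(fromDisplay("$1.00")); // 100
    }
}
```

In a real application, `java.text.NumberFormat.getCurrencyInstance()` would handle locale-specific formatting instead of hand-rolled string work.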

Why isn't this approach a best practice among applications that don't need to represent fractions of cents (or the equivalent in other currency types)?

Jon Schneider
    Or, perhaps better: Why don't applications use a data-type specifically designed for dealing with monetary units? (The YEN and USD have different "smallest denominations" and a silent assignment between currencies is most likely wrong.) I would guess it's the same reason why float/doubles are used: they provide the "precision and operations" (albeit with all the inherent issues) without needing to use a [proper] domain-specific type. Also, sometimes (e.g. stock tickers) it just doesn't matter. –  Mar 18 '11 at 18:22

3 Answers


Why don't applications typically use [whole numbers] to internally represent currency values?

  1. It does not make for simple coding. $1.10 translates to 110¢. Okay, but what about when you need to calculate tax (i.e. $1.10 * 4.225%, Missouri's tax rate, which results in $0.046475)? To keep all money in whole numbers you'd have to convert the sales tax rate to a whole number as well (4225, i.e. parts per 100,000), and scale the amount into sub-cent units to match: 110¢ becomes 11000000 hundred-thousandths of a cent. The math then becomes 11000000 * 4225 / 100000 = 464750. This is a problem, as we now have values in fractions of cents (11000000 and 464750 respectively, the latter being 4.6475¢). All this for the sake of storing money as whole numbers.

  2. Therefore, it's easier to think and code in terms of the native currency. In the United States, this would be in dollars with the cents being a decimal fraction (i.e. $1.10). Coding in terms of 110¢ isn't as natural. Base-10 decimal types (such as Java's BigDecimal and .NET's Decimal) are usually precise enough for currency values, unlike base-2 floating-point types such as float and double.
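The two routes from point 1 can be sketched side by side. The method names are made up for illustration, and the half-up rounding to whole cents is one possible policy, not the only one:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class TaxCalc {
    // Whole-number route: the 4.225% rate stored as 4225 parts per 100,000.
    // cents * 4225 is in hundred-thousandths of a cent; round half-up to cents.
    static long taxCentsScaled(long cents) {
        long scaled = cents * 4225;          // 110 -> 464,750
        return (scaled + 50_000) / 100_000;  // 4.6475c -> 5c
    }

    // Decimal route: BigDecimal with an explicit rounding step.
    static BigDecimal taxBigDecimal(BigDecimal price) {
        return price.multiply(new BigDecimal("0.04225"))  // 0.0464750
                    .setScale(2, RoundingMode.HALF_UP);   // 0.05
    }

    public static void main(String[] args) {
        System.out.println(taxCentsScaled(110));                   // 5
        System.out.println(taxBigDecimal(new BigDecimal("1.10"))); // 0.05
    }
}
```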

Why isn't this approach a best practice among applications that don't need to represent fractions of cents (or the equivalent in other currency types)?

I think number 1 above shows that it's hard to get away from needing to represent fractions of cents, at least when it comes to calculating sales tax - something common in business applications.

Matt
  • +1 because it talks about the fact that "bare" integers don't have built-in support for many required operations. '$1.10' is display currency, though; mils are 1/1000 of a dollar. Also, `BigDecimal` is *exact* (for a certain subset of operations / desired memory expenditure, and it offers customizable rounding modes) while `decimal` (.NET) is just a *really huge 128-bit relative-precision number* that can brute-force past a number of issues but still "suffers" from not being a fixed-precision type :) –  Mar 18 '11 at 20:20
  • Matt, can you clarify your point regarding "To keep all money in whole numbers"? I would represent the tax rate as a floating-point value (as a tax rate is not a currency value); then I would multiply my integer currency value by that floating-point value to find my result, and accept any loss of precision as fractions of pennies which I can discard. Assuming we do not have a requirement to track fractions of pennies for the purposes of sales tax to be remitted to the state (might be an issue, but assume there are viable workarounds), is there any other downside to this approach? – Jon Schneider Mar 18 '11 at 21:18
  • @Jon, I was just trying to give an example of how it might not always be simple, and the things you would have to go through to never lose precision (if that is desired, since using floats is commonly discouraged due to loss of precision). However, doing intermediate math with floats makes sense for sales tax calculations, and would probably be more common. – Matt Mar 18 '11 at 21:35

Integer types

It's a bad idea to use most integer data types to represent currencies, because of:

  • a very limited representable value range with respect to common applications;
  • the extra burden it imposes on handling fractional values.

Specifically, the limited value range can be a serious problem with shorter integer types. Let's consider a common 32-bit signed integer (usually an int):

  • the value range is from approx. -2.15 billion to +2.15 billion, which by itself rules out accounting, banking, and other serious financial use;
  • when the last two digits are used to represent the fractional part (cents), the range shrinks to -21.5 million to +21.5 million;
  • if multiplication is to work without overflow (not to speak of mixed-precision calculations), the range scales down even further.

With a 64-bit signed integer (usually a long) you can count up to about 92 thousand trillion when storing cents. When thinking about the global economy, money is counted in trillions, so this is not a reasonable option either.
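A quick sketch of these limits in Java. The figures are just Integer.MAX_VALUE and Long.MAX_VALUE divided by 100, treating cents as the stored unit:

```java
public class RangeLimits {
    public static void main(String[] args) {
        // 32-bit int: ~2.15 billion cents = ~21.5 million dollars
        System.out.println(Integer.MAX_VALUE / 100);  // 21474836
        // 64-bit long: ~9.2e18 cents = ~92 thousand trillion dollars
        System.out.println(Long.MAX_VALUE / 100);     // 92233720368547758
        // Multiplication overflows long well before the nominal maximum:
        long cents = 4_000_000_000L;           // $40 million in cents
        long scaled = cents * 4_000_000_000L;  // wraps around silently
        System.out.println(scaled < 0);        // true
    }
}
```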

Floating-point types

It's a bad idea to use floating-point data types, because they are imprecise by nature, which is a fatal problem for the vast majority of monetary calculations.
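The classic demonstration, as a minimal sketch: 0.10 has no exact binary representation, so ten dimes don't sum to exactly one dollar:

```java
public class FloatDrift {
    // Sum ten dimes in binary floating point.
    static double tenDimes() {
        double total = 0.0;
        for (int i = 0; i < 10; i++) total += 0.10;
        return total;
    }

    public static void main(String[] args) {
        System.out.println(tenDimes() == 1.0);  // false
        System.out.println(tenDimes());         // slightly less than 1.0
    }
}
```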

Suitable data types

It's a very good idea to use fixed-point or decimal data types, because they usually don't suffer from the negative properties of floating-point and integer data types:

  • representable value range is broad enough;
  • precision can be adjusted by rounding in respect to calculation requirements;
  • no confusion, thanks to natural handling of fractional values;
  • precise decimal number representation.

Last but not least, the suitable data type heavily depends on the language and its capabilities.

Other problems

Also, in many calculation scenarios it is necessary to use different precision for intermediate calculations and for the resulting values. While results usually have to be represented with the precision defined for the particular currency by the respective law, intermediate calculations may require higher precision. Examples are percentage calculations in loan payments, insurance costs, etc., or currency conversions, where exchange rates are frequently given in higher precision.
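The currency-conversion case can be sketched like this. The exchange rate and the half-even rounding policy are illustrative assumptions, not values or rules prescribed by any standard:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Exchange rates are quoted to more decimal places than the currency
// itself allows, so the intermediate product is kept at full precision
// and only the final result is rounded to the currency's legal scale.
public class Conversion {
    static BigDecimal toEur(BigDecimal usd, BigDecimal rate) {
        return usd.multiply(rate)                       // full-precision intermediate
                  .setScale(2, RoundingMode.HALF_EVEN); // 2 decimal places for EUR
    }

    public static void main(String[] args) {
        // 123.45 * 0.926135 = 114.33136575 -> rounded to 114.33
        System.out.println(toEur(new BigDecimal("123.45"),
                                 new BigDecimal("0.926135")));
    }
}
```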

Multi-currency software also needs to deal with the fact that different currencies have different lawful precisions. Rounding may also be imposed by accounting standards.

Ondrej Tucny
  • Ondrej, good point about intermediate calculations. With an int representing currency, you would effectively be forcing a round to the nearest penny after each and every step of a multi-step calculation, which may not be what the requirements call for. Can you clarify / elaborate on your other point regarding "very limited representable range"? – Jon Schneider Mar 18 '11 at 21:22
  • Edited my answer to explain more. – Ondrej Tucny Mar 18 '11 at 21:55
  • If a program is going to be used to calculate world economic data accurate to the penny, a 64-bit integer may not suffice, but how many programs are going to be used for that? In most financial calculations, even 32-bit integers would be sufficient for storing most values if one could accommodate intermediate calculations spilling out to 64 bits before being scaled back down. – supercat Nov 18 '12 at 15:32
  • @supercat That's "640 KB will be enough for everyone" logic :-) – Ondrej Tucny Nov 21 '12 at 03:37
  • @OndrejTucny: I wasn't advocating the use of 32-bit values for financial calculations (since they're barely adequate), but rather suggesting that if 32 bits are in the right ballpark of what's required, 64-bit variables, which can handle numbers that are four billion times bigger, should suffice. Incidentally, with regard to the "640K" remark, even if Mr. Gates actually said it, it's worthwhile to note that (1) the choice faced by the PC engineers wasn't between 640K or unlimited directly-accessible memory; it was probably between 512K, 640K, or 768K. Not much difference, really. – supercat Nov 21 '12 at 15:33
  • (2) Once memory became affordable, Microsoft broke that barrier as effectively as it could be. With regard to storing monetary amounts, if the currency degrades to the point that a business for whom a non-optimized database would be suitable does more than a trillion dollars worth of business (the limit for 64-bit types, if the basic unit is $0.000001), the ability of accounting software to handle such numbers will be the least of the business' concerns. – supercat Nov 21 '12 at 15:50

I believe gnucash uses a rational representation, storing a numerator and denominator. I don't have the data to say what the best or common practice is. Floating point has the advantage of expediency and the disadvantage that it's imprecise. Personally I would not use floating point.
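A rational amount can be sketched as follows. This is only an illustration of the numerator/denominator idea, not gnucash's actual implementation:

```java
// Sketch of a rational (numerator/denominator) monetary amount.
public class Rational {
    final long num, den;  // invariant: den > 0

    Rational(long num, long den) { this.num = num; this.den = den; }

    // Exact addition: cross-multiply, then reduce to lowest terms.
    Rational plus(Rational o) {
        return reduce(num * o.den + o.num * den, den * o.den);
    }

    static Rational reduce(long n, long d) {
        long g = gcd(Math.abs(n), d);
        return new Rational(n / g, d / g);
    }

    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }

    public static void main(String[] args) {
        // 1/10 + 2/10 is exactly 3/10; no binary-fraction drift.
        Rational r = new Rational(1, 10).plus(new Rational(2, 10));
        System.out.println(r.num + "/" + r.den); // 3/10
    }
}
```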

Andy