10

Currently learning C++, and this has just occurred to me. I'm curious because I'm about to develop a simple bank program. I'll be using double for calculating dollars, interest rates, etc., but there are tiny differences between computer calculations and human calculations.

I imagine that those extra fractions of a penny in the real world can make all the difference!

Bo Persson
  • 90,663
  • 31
  • 146
  • 203

4 Answers

14

In many cases, financial calculations are done using fixed-point arithmetic instead of floating point.

For example, the .NET Decimal type, or the VB6 Currency type. These are basically integer types where everyone has agreed that the unit is some fraction of a cent, such as $0.0001.

And yes, some rounding has to occur, but it is done very systematically. Usually the rounding rules are somewhere deep in the fine print of your contract (the interest rate is x%, compounded every T, rounded up to the nearest penny, but not less than $y every statement period).

Ben Voigt
  • 277,958
  • 43
  • 419
  • 720
  • I agree, very accurate calculations for game co-ordinates are often done with integer calculations too. Floating point calculations always have some level of rounding errors. – Chris Snowden Jun 18 '12 at 16:12
  • Didn't think of it that way, cheers. I'm having a look at fixed-point arithmetic. Another thought that occurred is whether banks use rounding in their favour and if it generates massive amount of profits (given large number of transactions occurring all the time). –  Jun 18 '12 at 16:18
  • 2
    @Kurzon: "ordinary" transactions don't have any rounding errors, since when I transfer `$1.23` to you, my balance goes down by exactly `123c` and yours goes up by exactly `123c`: the fixed-point takes care of it. It does apply to things like interest and commissions, where you multiply a quantity of money by some percentage that might have a lot of decimal places. But like Ben says the rules are defined by a contract or law somewhere. If you're a high-velocity arbitrage trader, maybe you care about rounding rules. Other customers probably not so much. – Steve Jessop Jun 18 '12 at 16:34
  • Decimal is a floating-point type, not fixed. It is, however, as the name suggests, floating-point decimal. With floating-point decimal the number 0.1, for example, is represented exactly. With float, however, 0.1 cannot be represented exactly. Typically, in banking, for performance reasons, the double type is scaled up and used, despite it being a binary floating-point type. This comes with the caveat that developers really have to know what they are doing with precision and scale. – Frank Nov 20 '22 at 11:44
2

The range of an 8-byte long long is -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Do everything as thousandths of a cent/penny and you can still handle amounts up to the trillions of dollars/pounds/whatever.

David Allan Finch
  • 1,414
  • 8
  • 20
1

It depends on the application. All calculations with decimals will require rounding when you output them as dollars and cents (or whatever the local currency is): the base price of an article may only have two digits after the decimal, but when you add on sales tax or VAT, there will be more, and if you need to calculate interest on an investment, there will be more.

Generally, using double gives the most accurate results, however... if your software is being used for some sort of bookkeeping required by law (e.g. for tax purposes), you may be required to follow standard accepted rounding practices, and these are based on decimal arithmetic, not binary, hexadecimal or octal (the usual bases for floating point; binary is universal on everything but mainframes). In such cases, you'll need some sort of Decimal class which ensures the correct rounding. For other uses (e.g. risk analysis), double is fine.

James Kanze
  • 150,581
  • 18
  • 184
  • 329
0

Just because a number is not an integer does not mean that it cannot be calculated exactly. Consider that a dollars-and-cents value is an integer if one counts the number of pennies (cents), so it is a simple matter for a fixed-point library using two decimal places of precision to multiply each number by 100, perform the calculation as an integer, and then divide by 100 again.

Ether
  • 53,118
  • 13
  • 86
  • 159