
I am currently working on a stock-market-related project in C++ that involves a lot of floating-point values, such as prices and index levels.

I have read in many places that you should use decimal floating point for money-related arithmetic, for example in "Why not use Double or Float to represent currency?" and "Difference between decimal, float and double in .NET?".

To my understanding, the difference between binary float and decimal float is the base in which the exponent is interpreted: binary float uses base 2, while decimal float uses base 10. With decimal float you still get rounding errors; you still cannot represent 1/3 exactly (correct me if I am wrong). It seems quite possible to multiply someone's account balance by 30%, hit a rounding error, and have that error propagate further after a few more calculations. Besides a bigger number range, why should I use decimal float in financial arithmetic?
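
For concreteness, here is a minimal snippet showing the kind of binary rounding error I am worried about (this assumes IEEE-754 doubles, which is what `double` is on mainstream platforms):

    #include <iomanip>
    #include <iostream>

    int main() {
        double a = 0.1, b = 0.2;

        // Neither 0.1 nor 0.2 is exactly representable in base 2, so their sum
        // is not exactly the double nearest to 0.3.
        std::cout << std::boolalpha << (a + b == 0.3) << '\n';  // prints: false
        std::cout << std::setprecision(17) << (a + b) << '\n';  // prints: 0.30000000000000004
    }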

Jason
  • Because you can't represent 0.01 in binary. – Mysticial Nov 05 '14 at 18:53
  • Note: of 0.1, 0.2, 0.3, ..., 0.9, only 0.5 is an exact binary floating-point number, while all of them are exact decimal floating-point numbers. – Nov 05 '14 at 18:54
  • The best thing you can do is use pure integer arithmetic: don't calculate in dollars, but in cents (or tenths of a cent, if your accounts are kept to that precision). – Deduplicator Nov 05 '14 at 18:58 (see the sketch after these comments)
  • There may also be an efficiency issue. Most floating-point values are stored in three pieces, which have to be extracted, processed, and then recombined; that's two extra operations compared with integers. Integers don't have packing issues. – Thomas Matthews Nov 05 '14 at 20:51
  • FWIW, taking 30% of a decimal monetary value will not have any rounding issues. Monetary values do not go deep down into the lowest bits, and decimals can exactly represent percentages. – Rudy Velthuis Nov 06 '14 at 13:35
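
Deduplicator's integer-cents suggestion could look like the following minimal sketch (the `Cents` alias and helper function are illustrative, not from the thread):

    #include <cstdint>
    #include <iomanip>
    #include <iostream>

    // Keep money as a whole number of cents; std::int64_t comfortably covers
    // any realistic balance (about +/- 92 quadrillion dollars).
    using Cents = std::int64_t;

    Cents fromDollarsAndCents(std::int64_t dollars, std::int64_t cents) {
        return dollars * 100 + cents;
    }

    int main() {
        // Three items at $1.10 each stay exact in integer cents,
        // whereas 3 * 1.10 with binary doubles does not compare equal to 3.30.
        Cents item  = fromDollarsAndCents(1, 10);
        Cents total = 3 * item;

        std::cout << total / 100 << '.'
                  << std::setw(2) << std::setfill('0') << total % 100 << '\n';  // prints: 3.30
    }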

2 Answers


Depending on what financial transactions you're performing, rounding errors are likely to be inevitable. If an item costs $1.50 with 7% sales tax, you aren't going to be charged $1.605; the price you pay will be either $1.60 or $1.61. (US currency units theoretically include "mils", or thousandths of a dollar, but the smallest denomination coin is $0.01, and almost all transactions are rounded to the nearest cent.)

If you're doing simple calculations (just adding and subtracting quantities and multiplying them by integers), all the results will be whole numbers of cents. If you use binary floating-point to represent the number of dollars, most amounts will not be representable; a calculation that should yield $0.01 might actually yield $0.01000000000000000020816681711721685132943093776702880859375.
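
You can see that expansion yourself with a short program (a minimal sketch, assuming IEEE-754 doubles and a standard library that prints the stored value exactly when asked for enough digits):

    #include <iomanip>
    #include <iostream>

    int main() {
        double cent = 0.01;

        // The double closest to 0.01 is not 0.01; asking for enough digits
        // reveals the value that is actually stored.
        std::cout << std::fixed << std::setprecision(60) << cent << '\n';
        // Typically prints the long value quoted above rather than 0.01.
    }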

You can avoid that problem by using integers to represent the number of cents (or, equivalently, using fixed-point if the language supports it) or by using decimal floating-point that can represent 0.01 exactly.

But for more complex operations, like computing 7% sales tax, dividing a sum of money into 3 equal parts, or especially compound interest, there are still going to be results that aren't exactly representable unless you use an arbitrary-precision package like GMP.
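
For example, splitting $1.00 three ways cannot produce three equal representable amounts; some policy has to decide where the leftover cent goes. Here is a minimal integer-cents sketch of one such policy (giving the extra cents to the first shares; an illustration, not a legally mandated rule):

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Split a non-negative `total` (in cents) into `parts` shares that differ
    // by at most one cent and always sum back to `total` exactly.
    std::vector<std::int64_t> splitEvenly(std::int64_t total, int parts) {
        std::vector<std::int64_t> shares(parts, total / parts);
        for (std::int64_t i = 0; i < total % parts; ++i)
            ++shares[i];  // hand out the leftover cents one at a time
        return shares;
    }

    int main() {
        // $1.00 split three ways: 34 + 33 + 33 cents, not 33.333... each.
        for (std::int64_t s : splitEvenly(100, 3))
            std::cout << s << " cents\n";
    }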

As I understand it, there are laws and regulations that specify exactly how such rounding is to be resolved. If you apply 7% sales tax to $1.50, you can't just pick whichever of $1.60 and $1.61 you prefer; the law tells you which one is correct.

If you're writing financial software to be used by other people, you need to find out exactly what the regulations say. Once you know that, you can determine what representation (integers, fixed-point, decimal floating-point, or whatever) can best be used to get the legally required results.

(Disclaimer: I do not know what these regulations actually say.)

Keith Thompson
  • I'm imagining a multi-million dollar lawsuit because a program incorrectly rounded an invoice up to $1.61 instead of to $1.60. – Mysticial Nov 05 '14 at 20:05
  • @Mysticial: I'm imagining a multi-million dollar lawsuit because a program incorrectly rounded an invoice up to $1.61 instead of to $1.60 *a billion times*. – Keith Thompson Nov 05 '14 at 20:35

At least in the USA, most financial companies are required to use decimal-based math. IBM mainframes, going back to the System/360, can perform arithmetic directly on variable-length strings of packed-decimal digits. Typically some form of fixed-point number is used, with a set number of digits after the decimal point. High-level languages like COBOL support packed (or unpacked) decimal numbers. On IBM mainframes there is also a lot of legacy assembly code alongside the COBOL code, partly because at one time certain types of databases were accessed via macros in assembly (now called HLASM, High Level Assembler).

rcgldr