
I keep hearing people claim that the best way to represent money/currency is to use a sufficiently high-precision floating-point type like double.

I don't quite get it. double is just a floating-point type that uses 52 bits for the mantissa and 11 bits for the exponent (plus one sign bit, per IEEE-754).

I know double is more precise than float, but if we use double to represent money in financial applications, won't that have serious consequences? Imagine this:

double d = ...;       // d is a very large number: a very rich guy's balance
double sum = d + 1;   // the guy deposits another $1 into the account

Since $1 is tiny relative to d, it gets rounded away, so sum ends up with exactly the same value as d. Isn't that a very serious consequence? Someone makes a deposit and the balance stays the same.
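
A minimal runnable sketch of what I mean (2^53 is the first integer at which a 64-bit double can no longer distinguish n from n + 1, so the added dollar is lost to rounding):

#include <stdio.h>

int main(void) {
    double d = 9007199254740992.0;  // 2^53: first integer where double loses unit precision
    double sum = d + 1;             // the $1 deposit is rounded away
    printf("d == sum: %s\n", d == sum ? "true" : "false");  // prints "true"
    return 0;
}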

  • 7
    Who said `double` is better for money representation? Do you have any sources/arguments? – Pierre Jul 03 '20 at 12:38
  • 11
    Wait. Who says it's a good idea? https://stackoverflow.com/questions/3730019/why-not-use-double-or-float-to-represent-currency – StoryTeller - Unslander Monica Jul 03 '20 at 12:39
  • 3
    Using any floating-point is very, very bad. Money is one of the few things where it's not acceptable to have imprecision and rounding issues. – Thomas Jager Jul 03 '20 at 12:40
  • 1
    Your observations are correct, and whoever says that it's okay to use double in such an application is just plain wrong (and will hopefully never write actual software dealing with money). – Felix G Jul 03 '20 at 12:56
  • 3
    There is no fixed-size format that can handle all monetary arithmetic. Not integer, not floating-point, not fixed point, not rational, nothing. **Every** format with a fixed number of bits has limits: It can represent only a finite number of numbers, so, in some circumstances, it **must** overflow, underflow, round, truncate, or otherwise approximate or break. Integer and fixed-point arithmetic breaks with overflow or with compound interest or fractions outside its base. Floating-point breaks with rounding issues. Proper engineering is to know the issues and design correctly, in any format. – Eric Postpischil Jul 03 '20 at 12:56
  • 1
    There are cases where integer or fixed-point formats are useful for money, as in simple accounting. There are cases where floating-point formats are useful, as in computing compound interest or evaluating stock options models. Sometimes a correct solution is to compute with floating-point and later convert to fixed-point. – Eric Postpischil Jul 03 '20 at 12:59
  • 1
    Regarding your specific example of adding 1 failing because there is no change in the sum, this occurs only when 2^53 units of currency are reached. If the unit is cents, that is 2^53 cents ≈ 90 trillion dollars. So that will not occur in any single-company or single-individual accounting (until, say, Apple’s market value increases 60-fold). It could occur in government and global-economy calculations. – Eric Postpischil Jul 03 '20 at 13:01
  • 1
    [This write-up discusses the topic in detail](https://www.evanjones.ca/floating-point-money.html), and it seems to support more the flavor of what @EricPostpischil is saying than what the other comments are suggesting. – ryyker Jul 03 '20 at 13:05
  • 2
    @ThomasJager: Re “it's not acceptable to have imprecision and rounding issues”: The IRS says it is okay to round the numbers I report on my tax return, to the nearest dollar. I have seen banks and brokerages slip a penny in some calculations. Many economic things, such as depreciation, are only estimates or models anyway, and the precision of the IEEE-754 binary64 format far exceeds the accuracy of the estimates. – Eric Postpischil Jul 03 '20 at 13:12
  • 1
    Does this answer your question? [Why not use Double or Float to represent currency?](https://stackoverflow.com/questions/3730019/why-not-use-double-or-float-to-represent-currency) (Actually, there is no _answer_ to this question, only opinions. But the topic here is a repeat.) – ryyker Jul 03 '20 at 13:13
  • How does this sound: Know what the accounting rules are for the transaction(s) you are performing in your code, and use the specific characteristics of floating-point, fixed-point, and/or integer operations on your system to implement those accounting rules. In other words, properly handling monetary transactions on a computer is ***HARD***. You need to know what the accounting rules are (and they will change for different jurisdictions), and the exact characteristics of every number format available to you. The main issue with floating point is that rounding can fail in unexpected ways. – Andrew Henle Jul 03 '20 at 13:21
  • 1
    To answer your question "*Why is it good to use double to represent money in C?*": it isn't, so there is no sound reasoning to explain. It isn't good to use any floating-point type at all to represent currency and other values that must be exact. Use an integer type and store amounts in the smallest unit (e.g. cents; see the sketch after these comments). Look at the answers to the duplicate question. – RobertS supports Monica Cellio Jul 03 '20 at 14:01
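
To make the integer-cents approach from the comments above concrete, here is a minimal sketch (cents_t, make_cents, and print_money are hypothetical names chosen for illustration; a real system would also need overflow checks and a policy for negative amounts):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

// Store amounts as whole cents in a 64-bit integer, so every cent is exact.
typedef int64_t cents_t;

// make_cents(19, 99) is $19.99. Assumes non-negative inputs for simplicity.
static cents_t make_cents(int64_t dollars, int64_t cents) {
    return dollars * 100 + cents;
}

static void print_money(cents_t amount) {
    printf("$%" PRId64 ".%02" PRId64 "\n", amount / 100, amount % 100);
}

int main(void) {
    cents_t balance = make_cents(90000000000000LL, 0);  // $90 trillion: exact, unlike double
    balance += make_cents(1, 0);                        // deposit $1: not lost to rounding
    print_money(balance);                               // prints $90000000000001.00
    return 0;
}

Note that, as Eric Postpischil's comments point out, this format has limits of its own: int64_t cents overflows near $92 quadrillion, and fractional-cent results (compound interest, percentages) still need explicit rounding rules.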

0 Answers