997

I keep seeing people using doubles in C#. I know I read somewhere that doubles sometimes lose precision. My question is when should I use a double and when should I use a decimal type? Which type is suitable for money computations? (i.e. greater than $100 million)

HerbalMart
Soni Ali
  • Do you want fractions of cents? (like at gas stations) – Daniel F. Thornton Jul 22 '09 at 14:39
  • http://stackoverflow.com/questions/803225/when-should-i-use-double-instead-of-decimal – AaronS Jul 22 '09 at 14:44
  • There's actually a fairly simple answer: decimal works like a long and an int (it's essentially a scaled integer), but it has a dot somewhere in its syntax and output format (see http://en.wikipedia.org/wiki/Integer_(computer_science) ). Double and float work with a mantissa and an exponent (see http://en.wikipedia.org/wiki/Floating_point ). That's it. – atlaste Apr 04 '14 at 09:24

7 Answers

1205

For money, always decimal. It's why it was created.

If numbers must add up correctly or balance, use decimal. This includes any financial storage or calculations, scores, or other numbers that people might do by hand.

If the exact value of numbers is not important, use double for speed. This includes graphics, physics, and other physical-science computations where there is already a "number of significant digits".
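
A quick way to see the difference (a minimal C# sketch, not part of the original answer - summing ten dimes):

using System;

class DoubleVsDecimal
{
    static void Main()
    {
        double d = 0;
        decimal m = 0;
        for (int i = 0; i < 10; i++)
        {
            d += 0.1;   // binary fraction: each step carries round-off
            m += 0.1m;  // decimal fraction: exact
        }
        Console.WriteLine(d == 1.0);  // False (d is 0.9999999999999999)
        Console.WriteLine(m == 1.0m); // True
    }
}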

David
  • It's not that double is inaccurate - it has *relative* accuracy and can represent very large or small magnitudes that decimal cannot handle at all. – Michael Borgwardt Jul 22 '09 at 15:14
  • Here's why you use Decimal for money: Double's accuracy is only 16 decimal digits, and after just a few arithmetic ops, errors will quickly accumulate large enough to creep into the 15, 14, 13, etc. digits. Rounding to "cents" requires at least one digit of full accuracy after the cents digit, but really you should reserve 4 or 5 to insulate from cumulative arithmetic errors, which you CANNOT allow to corrupt the hundredths column you use to round the cents. That leaves you with 16 (total) - 2 (cents) - (4 or 5 error padding) = oh $hit only 7 (or less) reliable integer digits for your money! – Triynko Mar 21 '12 at 22:01
  • As a result, I wouldn't manipulate monetary values of more than $9.99 (1 integer digit), because rather than 4 or 5 digits of error accumulation padding, I'd want more like 10 or 11. Since Decimal is a 128-bit number, it gives you that kind of isolation, even with numbers in the hundreds of trillions of dollars, because it has 28-29 digits of accuracy. However, you can't go much higher than that. 999,999,999,999,999.99 (999 trillion) would require 18 digits of accuracy to round properly, and since decimal gives you 28-29, that's only 10 digits of cumulative arithmetic error insulation. – Triynko Mar 21 '12 at 22:15
  • Just to rub it in... if you were building a game, would you really care if the barrel of explosives you just catapulted a quarter mile across a field lands a 1/16 of an inch off target because of the cumulative errors over the hundreds of "position + (velocity * time)" steps? I doubt it. – Triynko Mar 21 '12 at 22:21
  • To clear this up: double does not have 16 digits - that is only the number of *meaningful* digits. Floats are based around exponents in base 2 math - some base 10 numbers are corrupted because they are an infinite series if converted to a base 2 exponent; in binary float math `0.1 * 0.1 != 0.01` because 0.1 cannot be represented exactly. Math operations also lead to drift - add and subtract with dollars and cents and you can get numbers like 0.9999999999999. toString() initially hides this through rounding, but exact comparisons are broken immediately. – David Mar 27 '12 at 12:36
  • @Triynko : Note that decimal is base 10, which means it can represent monetary values like $0.30 *exactly*. This means that as long as you do only additions, subtractions and multiplication with integers, you will not have any round off errors at all. So it doesn't have 28 digits of accuracy, it has perfect accuracy. This is a huge difference compared to binary floating point, like double, which cannot represent $0.30 exactly. – avl_sweden Dec 12 '12 at 10:49
  • I could recognize the value in having a fixed-precision decimal type, but I don't like the auto-floating behavior of `Decimal`. If fifty people each buy one object that's "3/$1", the total collected should probably not be $16.67, but more likely either $16.50 or $17.00, depending upon whether the customers are charged $0.33 or $0.34. Although `Decimal` is large enough that significant loss of precision to rounding is unlikely to occur, it has no way of knowing how many digits are "important" or warning when rounding errors occur. – supercat May 02 '13 at 22:05
  • "For money, always decimal", not true - see here, in particular Jeffrey Hantin's answer: http://stackoverflow.com/questions/2604003/for-money-always-decimal – cbp May 16 '14 at 06:37
  • @cbp - the question you linked to is about VB library, and VB doesn't have decimal. It's completely unrelated to this discussion. – Davor Aug 05 '14 at 07:52
@Davor VB does have a decimal type (http://msdn.microsoft.com/en-au/library/xtba3z33.aspx), and VB6 has the Currency data type which is equivalent. Jeffrey's answer is perfectly relevant to the discussion: "Financial functions use floating-point for a few reasons: They don't internally accumulate -- they are based on a closed-form exponential/logarithmic computation, not iteration and summation over periods." How is that not relevant? – cbp Aug 07 '14 at 02:08
  • @cbp Jeffrey's [answer](http://stackoverflow.com/a/2604298/577765) is entirely irrelevant because he's talking about the `Financial` _Class_, not financial functions in general (I fixed his answer). You did not read his _entire_ answer, which makes it clear that `decimal` is better for financial operations. He says: "_By using decimal arithmetic, you avoid introducing and accumulating round-off error._" and "_They are intended ... for decision support, and ... have little applicability to actual bookkeeping._" and "_but once that amount is determined, [everything else] happens in `decimal`._". – Solomon Rutzky Sep 05 '16 at 13:42
  • @cbp Also, as pointed out in this [VB forum post](http://www.vbforums.com/showthread.php?524101-Decimal-or-Double-Money-Currency&p=3368121&viewfull=1#post3368121), even though VB **.NET** has `Decimal`, VB **6** did not, and the `Microsoft.VisualBasic.Financial` functions use `Double` to be completely backwards compatible, not because it was the "proper" datatype for the operation. – Solomon Rutzky Sep 05 '16 at 14:15
  • Is it ***For money, always decimal***, or is that not true, or does it apply only to C#? – Kiquenet Feb 22 '17 at 10:10
  • Why not use int to represent money? Divide by 100 in your views. – Arijoon May 10 '17 at 12:57
  • @Arijoon I believe that's essentially what the decimal type is doing for you, except that it can divide by numbers greater than 100 as appropriate because some systems keep more than 2 decimal places of accuracy for money. (Bitcoin, for example.) The key is that decimal type is dividing by a power of 10 whereas double is dividing by a power of 2. – BlueMonkMN Nov 13 '17 at 15:38
201

My question is when should I use a double and when should I use a decimal type?

decimal for when you work with values in the range of 10^(+/-28) and where you have expectations about the behaviour based on base 10 representations - basically money.

double for when you need relative accuracy (i.e. losing precision in the trailing digits on large values is not a problem) across wildly different magnitudes - double covers more than 10^(+/-300). Scientific calculations are the best example here.

which type is suitable for money computations?

decimal, decimal, decimal

Accept no substitutes.

The most important factor is that double, being implemented as a binary fraction, cannot accurately represent many decimal fractions (like 0.1) at all, and it has fewer digits overall since it is 64 bits wide vs. 128 bits for decimal. Finally, financial applications often have to follow specific rounding modes (sometimes mandated by law). decimal supports these; double does not.
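
Both points in a short C# sketch (note vgru's comment below that Math.Round overloads do exist for double as well):

using System;

class Sketch
{
    static void Main()
    {
        // Binary vs. decimal fractions:
        Console.WriteLine(0.1 + 0.2 == 0.3);    // False for double
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True for decimal

        // Explicit rounding modes on decimal (banker's vs. commercial rounding):
        Console.WriteLine(Math.Round(2.5m, MidpointRounding.ToEven));       // 2
        Console.WriteLine(Math.Round(2.5m, MidpointRounding.AwayFromZero)); // 3
    }
}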

Samuel Slade
Michael Borgwardt
  • There is no doubt that `double` is not to be used when representing financial values, but what exactly did you mean when you wrote that `double` does not support specific rounding modes, compared to `decimal`? AFAIK, `Math.Round` has overloads which accept the `MidpointRounding` parameter for both `double` and `decimal`? – vgru Sep 28 '11 at 12:12
  • @Groo: I guess I must have looked at the .Net 1.1 API, the method was added in 2.0 - but it's still kinda pointless due to the problems with binary fractions. There's an example in the current API doc that illustrates this problem. – Michael Borgwardt Sep 28 '11 at 13:27
Saw this line in many comparisons but am not able to understand the meaning. Can you kindly elaborate? "Double cannot accurately represent many decimal fractions (like 0.1) at all" – Imad Jun 24 '18 at 14:16
  • @Imad: I have a website for that: https://floating-point-gui.de/ - basically it's the same reason why decimal numbers cannot accurately represent 1/3 – Michael Borgwardt Jun 25 '18 at 06:31
  • @MichaelBorgwardt when you said "decimal, *decimal*, **decimal**", which one should I use? – Shadi Alnamrouti Jan 08 '19 at 13:08
45

According to Characteristics of the floating-point types:

.NET Type        C# Keyword   Precision
System.Single    float        ~6-9 digits
System.Double    double       ~15-17 digits
System.Decimal   decimal      28-29 digits

The way I've been stung by using the wrong type (a good few years ago) is with large amounts:

  • £520,532.52 - 8 digits
  • £1,323,523.12 - 9 digits

You run out of precision at around 1 million for a float.

A 15 digit monetary value:

  • £1,234,567,890,123.45

You run out at around 9 trillion with a double. But with division and comparisons it's more complicated (I'm definitely no expert in floating point and irrational numbers - see Marc's point). Mixing decimals and doubles causes issues:

A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used because the floating-point number might not exactly approximate the decimal number.

When should I use double instead of decimal? has some similar and more in depth answers.

Using double instead of decimal for monetary applications is a micro-optimization - that's the simplest way I look at it.
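
To see where the digits run out (a minimal C# sketch, not from the original answer, using amounts near double's 15-17 digit limit):

using System;

class Magnitudes
{
    static void Main()
    {
        // Near 9 quadrillion, adjacent doubles are a whole unit apart,
        // so adding a penny changes nothing:
        double d = 9_000_000_000_000_000.00;
        Console.WriteLine(d + 0.01 == d);  // True - the penny vanishes

        // decimal holds 28-29 digits, so these 18 fit comfortably:
        decimal m = 9_000_000_000_000_000.00m;
        Console.WriteLine(m + 0.01m == m); // False - the penny survives
    }
}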

KyleMit
Chris S
  • `520,532.52` has 8 significant figures and `1,323,523.12` has 9 http://mathsfirst.massey.ac.nz/Algebra/Decimals/SigFig.htm – Royi Namir Apr 06 '14 at 14:32
  • The `float`, `double`, and `decimal` links in your post are broken. Here is a link to the latest MSDN documentation on all three numeric type aliases: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types – Mass Dot Net Oct 01 '20 at 17:06
43

Decimal is for exact values. Double is for approximate values.

  • USD: $12,345.67 (Decimal)
  • CAD: $13,617.27 (Decimal)
  • Exchange Rate: 1.102932 (Double)
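
In code the split might look like this (a hedged C# sketch; the variable names are mine, and the sample figures above are illustrative, so the computed total differs slightly):

using System;

class Conversion
{
    static void Main()
    {
        decimal usd = 12_345.67m;   // exact monetary amount (decimal)
        double rate = 1.102932;     // approximate ratio (double)
        // Convert at the boundary, coming back to an exact amount:
        decimal cad = Math.Round(usd * (decimal)rate, 2);
        Console.WriteLine(cad);     // 13616.43
    }
}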
Ian Boyd
  • Decimal is not for exact values. Decimal provides 28-29 decimal digits of accuracy according to the documentation. Decimal does not perform analytical arithmetic and is therefore not "exact". Decimal is great for money, because even with values in the trillions of dollars, it still leaves you with 10 digits of insulation from cumulative arithmetic error, while still being able to accurately round to cents. – Triynko Mar 21 '12 at 22:24
  • Why is the exchange rate double and not decimal? Isn't that also simply the price of 1 USD in CAD? – gerrit Nov 19 '12 at 17:28
  • @gerrit An exchange rate is not the *"price"* of 1 USD in CAD. It is the *ratio* of the value of the two. Your source determines how many decimal places you'll be given. For example, 1 USD is worth 1.0016 CAD. 1 Great Britain Pound is worth 1.5909 CAD. 1 Vietnamese Dong is worth 0.000048 CAD. It's a *ratio* and as such cannot realistically be truncated anywhere without losing precision. – Ian Boyd Nov 19 '12 at 18:33
  • @gerrit The 0.000048 is from the Bank of Canada. XE says one VND is worth 0.0000478405 Canadian. They are calculated as a division, which results in a floating point value. – Ian Boyd Nov 19 '12 at 18:41
No. Decimal is not exact. And for the exchange rate in the example above you should use decimal, since input and output are in base 10 (when using double there is loss of precision on the base conversion, since base 2 has no factor of 5). – user2622016 Apr 17 '15 at 15:45
  • It is also true that exchange rates are not an exact reciprocal: often you get "less money" by going in one direction than the other, such that if you converted back and forth repeatedly, the amount would eventually go to zero. This is due to the two RATES being different, not due to a fee or even rounding. –  Jan 07 '16 at 18:34
28

For money: decimal. It costs a little more memory, but doesn't have rounding troubles like double sometimes has.

Samuel Slade
Clement Herreman
  • it has all the troubles with rounding: try `1m/3m + 1m/3m == 2m/3m`. The main difference is more bits for the significand, and most important: no precision loss when operating on numbers whose divisors have only 2s and 5s in their prime factorisation. E.g. `1m/5m + 1m/5m` will be exactly equal to `2m/5m`. – user2622016 Apr 17 '15 at 15:47
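
To see user2622016's point in action, a quick check (a C# sketch, not part of the original answer):

using System;

class ThirdsAndFifths
{
    static void Main()
    {
        // A third doesn't terminate in base 10, so decimal has to round it:
        Console.WriteLine(1m/3m + 1m/3m == 2m/3m); // False
        // A fifth terminates in base 10, so decimal is exact:
        Console.WriteLine(1m/5m + 1m/5m == 2m/5m); // True
    }
}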
10

Definitely use integer types for your money computations.
This cannot be emphasized enough since at first glance it might seem that a floating point type is adequate.

Here is an example in Python code:

>>> amount = float(100.00)  # one hundred dollars
>>> print(amount)
100.0
>>> new_amount = amount + 1
>>> print(new_amount)
101.0
>>> print(new_amount - amount)
1.0

Looks pretty normal.

Now try this again with 10^20 Zimbabwe dollars:

>>> amount = float(1e20)
>>> print(amount)
1e+20
>>> new_amount = amount + 1
>>> print(new_amount)
1e+20
>>> print(new_amount - amount)
0.0

As you can see, the dollar disappeared.

If you use the integer type, it works fine:

>>> amount = int(1e20)
>>> print(amount)
100000000000000000000
>>> new_amount = amount + 1
>>> print(new_amount)
100000000000000000001
>>> print(new_amount - amount)
1
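
In C#, which the question is about, the same integer idea would store whole cents in a long (a sketch only - other answers argue that decimal already gives you this exactness, with fractional support built in):

using System;

class IntegerCents
{
    static void Main()
    {
        long cents = 10_000_000_000L;  // $100,000,000.00 held as whole cents
        cents += 1;                    // adding one cent is always exact
        Console.WriteLine($"{cents / 100}.{cents % 100:D2}"); // 100000000.01
    }
}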
Arsen Khachaturyan
Otto Allmendinger
  • You don't even need very large/small values to find differences between double's base-2 approximation and actual base-10 values, many small values cannot be accurately stored. Calculate "1 - 0.1 - 0.9" (make sure the compiler doesn't optimize out the equation), and compare it to zero. You'll find that with doubles the result is something like 2e-17 instead of 0 (make sure you run a compare, as many print/ToString functions round off doubles past a certain number of decimal places to remove these types of errors). – David Jul 22 '09 at 15:39
  • integer?! and what happens when you have $1.5? – Noctis Dec 16 '14 at 02:20
  • @Noctis you'll come up with a solution if you think about it – Otto Allmendinger Dec 16 '14 at 11:37
  • :) there are many solutions, but he was talking about double vs decimal, so unless he's far off, he'll need the decimal part... that's why your answer struck me as weird. – Noctis Dec 16 '14 at 21:34
Some computer games (EVE) use integers. This gives a simple system without fractions of the main money unit (cents). – Michael Chudinov Feb 27 '17 at 13:19
  • There's no reason to use `int` instead of `decimal` for accuracy purposes (maybe for performance reasons). Avoid `double`, but use `decimal`. Decimal uses a base-10 exponent so you don't encounter the same binary rounding errors that you do with double when parsing a base-10 value like 0.1. – BlueMonkMN Nov 13 '17 at 16:29
  • @Noctis he would probably suggest cooking up a custom class where you store the whole number and the decimal number as separate integers, which "decimal" has done already, probably with much better optimization too. – Daniel Wu Aug 05 '21 at 03:59
@DanielWu :rofl: ... maybe he would ... had to read the entire page to ground myself, as this was the pre-COVID-19 era, and I can barely remember what happened last week :) – Noctis Aug 05 '21 at 06:46
  • C# has decimal specifically for the case of money, and answering in Python isn't helping the question. – Jayson Minard Sep 21 '22 at 17:56
6

I think the main difference, besides bit width, is that decimal uses a base-10 exponent while double uses base 2.

http://software-product-development.blogspot.com/2008/07/net-double-vs-decimal.html
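
A small C# sketch of that difference: decimal stores a base-10 scale you can read back with the built-in decimal.GetBits (the scale sits in bits 16-23 of the fourth element, per the documented layout):

using System;

class DecimalScale
{
    static void Main()
    {
        int[] bits = decimal.GetBits(1.23m);
        int scale = (bits[3] >> 16) & 0xFF; // base-10 exponent, used as a divisor
        Console.WriteLine(scale);           // 2: the value is stored as 123 / 10^2
    }
}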

ysrb
honzajscz