I noticed that many libraries (for example, Math.NET, NodaTime, the TALib .NET port (a library for technical analysis), and the StackExchange client for Redis) use Double instead of Decimal. Why is that?
I know Double calculations are faster, but there are problems with precision when using Double, are they not?
A more concrete example - I have a project where I work with conversion rates for currencies and money in general. The source data usually have no more than 4 decimal places (e.g. 54.9320), and I require at least 6 decimal places for all the calculations. Currently I use decimal everywhere. Am I wrong? Should I switch to double?
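For illustration, a minimal sketch of the kind of rounding drift I'm worried about (the amounts and loop count are made up, not from my real data):

```csharp
using System;

class DriftDemo
{
    static void Main()
    {
        // Add one cent ten thousand times (made-up workload).
        double doubleSum = 0.0;
        decimal decimalSum = 0.0m;

        for (int i = 0; i < 10000; i++)
        {
            doubleSum += 0.01;   // 0.01 has no exact base-2 representation
            decimalSum += 0.01m; // 0.01 is exact in decimal
        }

        Console.WriteLine(doubleSum.ToString("R")); // slightly off from 100 (accumulated binary rounding error)
        Console.WriteLine(decimalSum);              // 100.00 exactly
    }
}
```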

chester89
- `decimal` is generally for monetary calculations, and `double` is for scientific calculations. Both represent floating-point numbers and thus both have precision issues - they just have different precision characteristics. – Enigmativity Aug 16 '16 at 08:46
- @Enigmativity so if I want precision, I should go with Decimal? – chester89 Aug 16 '16 at 08:48
- No, both have precision issues. If you are doing monetary calculations then use `decimal`. If you are doing scientific calculations then use `double`. – Enigmativity Aug 16 '16 at 08:50
- @chester89: It depends on what you do. If you need to exactly represent base-10 numbers (rare, mostly for monetary stuff), then use `decimal`. In most other cases you should use `double`, which is also faster, since it doesn't have to be emulated. Neither is better than the other regarding numeric precision. They're just different. – Joey Aug 16 '16 at 08:50
- Use `double` when you don't need an exact value. For money, or whenever the numbers must be an exact decimal representation, use `decimal`. – Ha Hoang Aug 16 '16 at 09:06
- Since your edit mentions you are dealing with money, **use decimal**. Also, this is pretty much a duplicate of [Difference between decimal, float and double in .NET](http://stackoverflow.com/q/618535/69809). – vgru Aug 16 '16 at 09:08
1 Answer
Normally `decimal` is used for everything related to money/prices, because `double` only carries about 15-16 significant digits; with large amounts, the rounding error can become noticeable.
- `System.Single` (`float`) - ~7 significant digits
- `System.Double` (`double`) - ~15-16 significant digits
- `System.Decimal` (`decimal`) - 28-29 significant digits
As to why those libraries use `double`: it is likely because binary floating-point operations are natively supported by processors and `double` uses less memory, while `decimal` is only more accurate at representing base-10 numbers.
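A short sketch of that base-10 point (the values are just illustrative):

```csharp
using System;

class Base10Demo
{
    static void Main()
    {
        // double is base-2, so common base-10 fractions are not exact:
        Console.WriteLine(0.1 + 0.2 == 0.3);           // False
        Console.WriteLine((0.1 + 0.2).ToString("R"));  // 0.30000000000000004

        // decimal is base-10, so the same arithmetic is exact:
        Console.WriteLine(0.1m + 0.2m == 0.3m);        // True
        Console.WriteLine(0.1m + 0.2m);                // 0.3
    }
}
```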

Nadeem_MK
- Two things to note: 1) number of decimals equates to *precision*, not accuracy, and 2) it's neither precision nor accuracy that's the issue, it's the fact that `float`/`double` are represented in base-2 and are not suitable for representing base-10 numbers commonly found in monetary applications. – vgru Aug 16 '16 at 09:19
- Decimal is used for everything related to money/price, not because it is more precise. Even when dealing with the GDP of the USA, 15 digits of precision is accurate "to the cent". For economics that actually requires `decimal`-level precision we would need to step into [Death Star building economics](http://www.shortlist.com/entertainment/films/the-cost-of-the-death-star). No, we use decimal because we are doing decimal (base-10) maths. – Aron Aug 16 '16 at 09:27