58

In C#, what data type should I use to represent monetary amounts? Decimal? Float? Double? I want to take in consideration: precision, rounding, etc.

Nosredna
  • 83,000
  • 15
  • 95
  • 122
A Salcedo
  • 6,378
  • 8
  • 31
  • 42

7 Answers

84

Use System.Decimal:

The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding.

Neither System.Single (float) nor System.Double (double) is precise enough to represent high-precision floating point numbers without rounding errors.
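The difference is easy to see directly. A minimal sketch (the class name is illustrative) comparing the same sum in double and decimal:

```csharp
using System;

class DecimalVsDouble
{
    static void Main()
    {
        // 0.1 and 0.2 have no exact binary representation,
        // so their sum as doubles is not exactly 0.3.
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);        // False
        Console.WriteLine(d.ToString("R")); // 0.30000000000000004

        // decimal stores base-10 digits, so the same sum is exact.
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m);       // True
    }
}
```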

Andrew Hare
  • 344,730
  • 71
  • 640
  • 635
  • 16
    I've upvoted this, but I take issue with the final claim. It's not that float/double aren't precise enough - it's just that they use an inappropriate base for money. You could have a 512 bit floating binary point value with more actual *precision* than decimal - but it still wouldn't be appropriate because it couldn't represent decimal values such as 0.1 exactly. – Jon Skeet Jun 17 '09 at 18:43
  • My mistake - I mixed meanings in my answer. I used "precise" in my final statement not to mean "mathematical precision" but rather to express that a rounded number (1 in this example) is "less precise" than (0.99999_) when the number assigned to the variable was in fact 0.9999_. Poor choice of words on my part... – Andrew Hare Jun 17 '09 at 18:46
  • Yes, even a float is _precise_ enough. The problem comes when converting from decimal to binary and back. The two different bases have different sets of irrational numbers. – Nosredna Jun 17 '09 at 18:47
  • 1
    I still don't agree with your last line. I'd say: "Any floating point system that relies on a binary mantissa is incapable of representing all decimal hundredths as rational numbers." – Nosredna Jun 17 '09 at 18:52
  • 1
    Nosredna, you are running into problems here because I think you have a misunderstanding of what "irrational" means. Irrationality is invariant over choice of base; an exact real value is either irrational or it isn't. Choice of base is irrelevant. The distinction you actually want to be making here has nothing to do with rationality but rather has to do with _representation error_. – Eric Lippert Jun 17 '09 at 20:13
  • 1
    Eric, good point. 1/10 is representable in decimal with a fixed number of digits, but not in binary. In binary it's a repeating fraction. – Nosredna Jun 17 '09 at 20:15
  • 4
    What you should be saying is that any *finite* floating point system with a binary mantissa is incapable of *exactly* representing all decimal hundredths. That is, *without accruing representation error*. For more analysis of basic issues in representation error in floating point arithmetic, you might want to see the series of articles I wrote on this topic: http://blogs.msdn.com/ericlippert/archive/tags/Floating+Point+Arithmetic/default.aspx – Eric Lippert Jun 17 '09 at 20:22
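The representation error Eric describes accrues with repeated arithmetic, and a short sketch demonstrates it:

```csharp
using System;

class RepresentationError
{
    static void Main()
    {
        // Adding the closest binary approximation of 0.1 ten times
        // does not land exactly on 1.0 for double...
        double d = 0.0;
        for (int i = 0; i < 10; i++) d += 0.1;
        Console.WriteLine(d == 1.0);   // False

        // ...but decimal, which stores base-10 digits, sums exactly.
        decimal m = 0.0m;
        for (int i = 0; i < 10; i++) m += 0.1m;
        Console.WriteLine(m == 1.0m);  // True
    }
}
```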
4

Use decimal in C#, and the money type in the database if you're using SQL.

Fiur
  • 632
  • 1
  • 6
  • 20
4

In C#, the Decimal type is actually a struct with overloaded operators for all math and comparison operations, working in base 10, so it will have fewer significant rounding errors. A float (and a double), on the other hand, is akin to scientific notation in binary. As a result, Decimal values are more accurate when you know the precision you need.

Run this to see the difference in the accuracy of the two:

using System;
using System.Collections.Generic;
using System.Text;

namespace FloatVsDecimal
{
    class Program
    {
        static void Main(string[] args) 
        {
            Decimal _decimal = 1.0m;
            float _float = 1.0f;
            for (int _i = 0; _i < 5; _i++)
            {
                Console.WriteLine("float: {0}, decimal: {1}", 
                                _float.ToString("e10"), 
                                _decimal.ToString("e10"));
                _decimal += 0.1m;
                _float += 0.1f;
            }
            Console.ReadKey();
        }
    }
}
cabgef
  • 1,398
  • 3
  • 19
  • 35
3

Decimal is the one you want.

TWA
  • 12,756
  • 13
  • 56
  • 92
1

Consider using the Money Type for the CLR. It is a custom value type (struct) that also supports currencies and handles rounding issues.

Mark Menchavez
  • 1,651
  • 12
  • 15
  • I was reading the "Adaptive Code via C#" book (from 2014) and it says: "it is not advisable to use the decimal type to represent currency values (...). Instead a Money (url same as in the above) value type should be used." I was confused as I had always been told to use decimal. The author doesn't argue why we shouldn't use decimal. Based on that and what George wrote, I guess decimal is the way to go. – Marshall Sep 27 '17 at 13:02
  • @Marshall My guess would be because `decimal` doesn't carry any information about the currency we're dealing with. Dollars? Euros? Something else? – Alonso del Arte Oct 17 '20 at 03:56
1

In C#, you should use decimal to represent monetary amounts.

Debendra Dash
  • 5,334
  • 46
  • 38
0

For something quick and dirty, any of the floating point primitive types will do.

The problem with float and double is that neither of them can represent 1/10 accurately, occasionally resulting in surprising trillionths of a cent. You've probably heard of the infamous 10¢ + 20¢. More realistically, try calculating a 6% sales tax on three items valued at $39.99 each pre-tax.
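That sales-tax example can be tried directly. A sketch, using Math.Round with 2 decimal places as the assumed rounding policy:

```csharp
using System;

class SalesTax
{
    static void Main()
    {
        // Three items at $39.99 with 6% sales tax, in decimal:
        decimal subtotal = 3 * 39.99m;                 // 119.97, exactly
        decimal tax = Math.Round(subtotal * 0.06m, 2); // 7.1982 rounds to 7.20
        Console.WriteLine(subtotal + tax);             // 127.17

        // The same computation in double carries binary round-off,
        // since neither 39.99 nor 0.06 is exactly representable.
        double dSubtotal = 3 * 39.99;
        Console.WriteLine(dSubtotal * 0.06);
    }
}
```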

Also, float and double have values like negative infinity and NaN that are of no use whatsoever for representing money. So decimal, which can represent 1/10 precisely, would seem to be the best choice.

However, decimal doesn't carry any information about what currency we're dealing with. Does the amount $29.89, for example, equal €29.89? Is $29.89 > €29.89? How do I make sure these amounts are displayed with the correct currency symbols?

If these sorts of details matter for your program, then you should either use a third-party library or create your own CurrencyAmount class (or whatever you want to call it).
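A minimal version of such a type might look like the sketch below. The name CurrencyAmount and all of its members are illustrative, not from any library:

```csharp
using System;

// Hypothetical value type pairing an amount with its currency.
public readonly struct CurrencyAmount : IEquatable<CurrencyAmount>
{
    public decimal Amount { get; }
    public string CurrencyCode { get; }   // e.g. "USD", "EUR"

    public CurrencyAmount(decimal amount, string currencyCode)
    {
        Amount = amount;
        CurrencyCode = currencyCode;
    }

    // Adding amounts in different currencies is a logic error.
    public static CurrencyAmount operator +(CurrencyAmount a, CurrencyAmount b)
    {
        if (a.CurrencyCode != b.CurrencyCode)
            throw new InvalidOperationException("Currency mismatch.");
        return new CurrencyAmount(a.Amount + b.Amount, a.CurrencyCode);
    }

    public bool Equals(CurrencyAmount other) =>
        Amount == other.Amount && CurrencyCode == other.CurrencyCode;

    public override string ToString() => $"{Amount:0.00} {CurrencyCode}";
}

class Demo
{
    static void Main()
    {
        var usd = new CurrencyAmount(29.89m, "USD");
        var eur = new CurrencyAmount(29.89m, "EUR");
        Console.WriteLine(usd.Equals(eur)); // False: same number, different currency
        Console.WriteLine(usd + new CurrencyAmount(0.11m, "USD")); // 30.00 USD
    }
}
```

The currency-mismatch check in the + operator turns the "$29.89 + €29.89" question into an exception instead of a silently wrong number.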

But if that sort of thing doesn't matter to the program, you can just use a floating point type. Or maybe even integers (e.g., my blackjack implementation in Java asks the player to enter a wager in whole dollars).
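The integer approach can look like this sketch, which assumes amounts are tracked as a whole number of cents:

```csharp
using System;

class WholeCents
{
    static void Main()
    {
        // Track money as an integral number of cents; integer
        // arithmetic is exact, so no representation issues arise.
        long priceCents = 3999;           // $39.99
        long subtotalCents = 3 * priceCents;

        // Convert to decimal only for display.
        Console.WriteLine($"${subtotalCents / 100m:0.00}"); // $119.97
    }
}
```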

Alonso del Arte
  • 1,005
  • 1
  • 8
  • 19