21

Possible Duplicate:
decimal vs double! - Which one should I use and when?

I'm using the double type for prices in my trading software. I've noticed that sometimes there are odd errors. They occur when the price contains 4 digits after the decimal point, like 2.1234.

When I send "2.1234" from my program, the order appears on the market at a price of "2.1235".

I don't use decimal because I don't need "extreme" precision. I don't need to distinguish, for example, "2.00000000003" from "2.00000000002". I need at most 6 digits after the decimal point.

The question is: where is the line? When should I use decimal?

Should I use decimal for all financial operations, even if I need just one digit after the decimal point (1.1, 1.2, etc.)?

I know decimal is pretty slow, so I would prefer to use double unless decimal is absolutely required.
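
A minimal snippet that shows the underlying representation issue (illustrative only, not the actual order code; the exact digits printed depend on the formatting used):

    using System;

    double d = 2.1234;   // nearest representable base-2 value, not exactly 2.1234
    decimal m = 2.1234m; // exact base-10 value

    // "G17" prints enough digits to round-trip the double, exposing the drift.
    Console.WriteLine(d.ToString("G17")); // e.g. 2.1234000000000002
    Console.WriteLine(m);                 // 2.1234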

Oleg Vazhnev
  • You shouldn't get problems with a double on such small numbers with few digits. A double is accurate to about 15 significant digits, e.g. 10 digits after the decimal point even with values in the thousands. I would check your code, since rounding errors etc. aren't fixed by using a decimal... – dtech Jun 14 '11 at 10:00

11 Answers

34

Use decimal whenever you're dealing with quantities that you want (and are able) to represent exactly in base 10. That includes monetary values, because you want 2.1234 to be represented exactly as 2.1234.

Use double when you don't need an exact representation in base-10. This is usually good for handling measurements, because those are already approximations, not exact quantities.

Of course, if an exact base-10 representation is not important to you, other factors come into consideration, which may or may not matter depending on the specific situation (a short sketch after the list illustrates the trade-offs):

  • double has a larger range (it can handle very large and very small magnitudes);
  • decimal has more precision (has more significant digits);
  • you may need to use double to interact with some older APIs that are not aware of decimal;
  • double is faster than decimal;
  • decimal has a larger memory footprint.
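
A small sketch of these trade-offs, using only standard library calls:

    using System;

    // Exact base-10 arithmetic with decimal...
    Console.WriteLine(0.1m + 0.2m == 0.3m); // True

    // ...versus nearest-base-2 arithmetic with double.
    Console.WriteLine(0.1 + 0.2 == 0.3);    // False

    // Range versus precision, as in the list above:
    Console.WriteLine(double.MaxValue);  // 1.7976931348623157E+308: huge range
    Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335: 28-29 significant digits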
R. Martinho Fernandes
  • And also for your list: double has a much larger range on both the top and bottom ends, if you need to represent truly enormous or tiny numbers. – Eric Lippert Jun 14 '11 at 14:56
10

When accuracy is needed and important, use decimal.

When accuracy is not that important, then you can use double.

In your case, you should be using decimal, as it's a financial matter.

Nawaz
5

For financial operations I always use the decimal type.

Sergey K
4

Use decimal; it's built to represent base-10 quantities (i.e. prices) exactly.

George Duckett
3

Decimal is the way to go when dealing with prices.

3

If it's financial software you should probably use decimal. This wiki article summarises it quite nicely.

Matt Bond
3

A simple response is in this example:

    decimal d = 0.3M + 0.3M + 0.3M;
    bool ret = d == 0.9M;   // true
    double db = 0.3 + 0.3 + 0.3;
    bool dret = db == 0.9;  // false

The test with the double fails because 0.3 is periodic in its binary (base-2) representation, so you lose precision. The decimal is stored in base 10 (as a scaled integer), so you do not lose significant digits unexpectedly. Decimals are unfortunately dramatically slower than doubles. Usually we use decimal for financial calculations, where every digit has to be accounted for, and double/float for engineering.

Felice Pollano
1

Double is meant as a generic floating-point data type; decimal is specifically meant for money and financial domains. Even though double usually works just fine, decimal might prevent problems in some cases (e.g. rounding errors once values get into the billions).
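
A quick way to see this kind of cumulative rounding error (a sketch; the double total's last digits may vary with formatting):

    using System;

    // Summing one million base-10 tenths as doubles drifts off the exact total.
    double sum = 0.0;
    for (int i = 0; i < 1_000_000; i++) sum += 0.1;
    Console.WriteLine(sum.ToString("G17")); // e.g. 100000.00000133288, not 100000

    decimal dsum = 0.0m;
    for (int i = 0; i < 1_000_000; i++) dsum += 0.1m;
    Console.WriteLine(dsum);                // 100000.0, exact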

dtech
  • Whilst I agree with your conclusions, have you a reference to back up your claim that "decimal is specifically meant for money"? Isn't it useful for any type of calculation requiring high precision? – Dan Diplo Jun 14 '11 at 10:06
  • @Dan: I refer you to section 4.1.7 of the C# specification which states **"The decimal type is a 128-bit data type suitable for financial and monetary calculations"**. That's what it was added to the language to do. – Eric Lippert Jun 14 '11 at 14:57
  • @Eric Lippert Thanks, can't get more authoritative than that! – Dan Diplo Jun 14 '11 at 15:13
1

There is an explanation of it on MSDN.

Rajeev
0

As soon as you start to do calculations on doubles you may get unexpected rounding problems, because a double uses a binary representation of the number while a decimal uses a decimal representation, preserving the decimal digits. That is probably what you are experiencing. If you only serialize and deserialize doubles to text or a database without doing any rounding, you will actually not lose any precision.

However, decimals are much better suited for representing monetary values, where you are concerned about the decimal digits (and not the binary digits that a double uses internally). But if you need to do complex calculations (e.g. integrals as used by actuarial computations) you will have to convert the decimal to double before doing the calculation, negating the advantages of using decimals.
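
For instance, a compound-growth calculation has to round-trip through double because `Math.Pow` only accepts doubles (a hedged sketch; `CompoundedValue` is a made-up helper name):

    using System;

    // Made-up helper: grow a decimal principal with double-only Math.Pow,
    // then convert back to decimal and round to cents.
    static decimal CompoundedValue(decimal principal, double annualRate, int years)
    {
        double grown = (double)principal * Math.Pow(1.0 + annualRate, years);
        return Math.Round((decimal)grown, 2);
    }

    Console.WriteLine(CompoundedValue(1000.00m, 0.05, 10)); // 1628.89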

A decimal also "remembers" how many digits it has; e.g. even though decimal 1.230 is equal to 1.23, the first is still aware of the trailing zero and can display it if formatted as text.
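
For example:

    using System;

    decimal a = 1.230m;
    decimal b = 1.23m;

    Console.WriteLine(a == b); // True: equal values...
    Console.WriteLine(a);      // 1.230  ...but a keeps its trailing zero
    Console.WriteLine(b);      // 1.23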

Martin Liversage
  • "e.g. integrals as used by actuary computations" - citation please? I would have thought greater precision of decimal would be better, although slower. – kristianp Apr 10 '12 at 06:40
  • @kristianp: For financial computations you should normally use decimals. The problem is not a lack of precision in floats (the precision is high) but the fact that numbers like 0.1 cannot be exactly represented using floating point numbers. However, some financial computations involve more than the simple algebraic operations of adding and multiplying numbers (for instance actuarial computations). Unless you want to create your own `log` or `sin` functions for decimals you will have to convert the decimals to floats and use a math library. – Martin Liversage Apr 10 '12 at 06:58
0

If you always know the maximum number of decimals you are going to have (digits after the point), then the best practice is to use fixed-point notation. That will give you an exact result while still being very fast.

The simplest way to use fixed point is to store the number as an int counting a fixed fraction of the unit. For example, if the price always has 2 decimals, you would store the amount of cents ($12.45 is stored in an int with value 1245, which thus represents 1245 cents). With four decimals you would store ten-thousandths (12.3456 would be stored in an int with value 123456, representing 123456 ten-thousandths), etc.

The disadvantage of this is that you sometimes need a conversion, for example when multiplying two values together (0.1 * 0.1 = 0.01 while 1 * 1 = 1; the unit has changed from tenths to hundredths). And if you are going to use other mathematical functions you also have to take things like this into consideration; see the sketch below.

On the other hand, if the number of decimals varies a lot, using fixed point is a bad idea. And if high-precision floating-point calculations are needed, the decimal datatype was constructed for exactly that purpose.
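
A minimal sketch of the fixed-point idea (the `Scale`, `FromString` and `Format` names are made up for illustration):

    using System;

    const long Scale = 10_000; // 4 implied digits after the point

    long FromString(string s) => (long)(decimal.Parse(s) * Scale);
    string Format(long v) => (v / (decimal)Scale).ToString("0.0000");

    long price = FromString("2.1234");    // stored as 21234 ten-thousandths
    Console.WriteLine(Format(price * 3)); // 6.3702, exact integer arithmetic

    // Multiplying two fixed-point values changes the unit, so rescale:
    long tenth = FromString("0.1000");                // 1000
    Console.WriteLine(Format(tenth * tenth / Scale)); // 0.0100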

lyml
  • thanks, that makes sense. I think that's probably how `decimal` works :) – Oleg Vazhnev Jun 14 '11 at 10:17
  • How often does multiplying a monetary value with another make sense anyway? ;) – Stein G. Strindhaug Jun 14 '11 at 10:39
  • @Stein: Interest rates are quoted in tenths and hundredths of a percent. You multiply those by money all the time. You want the calculation of 123456.78m * 0.0325m / 12m to be accurate when it's your monthly mortgage interest, right? **Use decimal for multiplying financial quantities.** – Eric Lippert Jun 14 '11 at 14:59
  • Yes, but the answer will be the same whether you multiply the interest rate on cents or on dollars with decimals: 1% of 1$ (1*0.01=0.01) is the same as 1% of 100 cents (100*0.01=1) (assuming 100 cents per dollar; I'm not from the USA). Unless there is some reason to multiply dollars by dollars (resulting in square-dollars?) there is no need for conversion before multiplication by percentage decimal values, no matter if it's stored as a decimal dollar or integer cents. -- But of course, if Decimal in C# is intended for money, it's better to use than rolling your own. – Stein G. Strindhaug Jun 15 '11 at 10:30
  • Btw, according to Google Calculator, square dollars exist: http://www.google.com/search?q=2%24+times+4%24 ...or they blindly calculate an answer no matter how stupid the question ;) – Stein G. Strindhaug Jun 15 '11 at 10:36