
When I multiply two decimal numbers, the result sometimes has a trailing 0. This happens when the last digits of the two decimal numbers have a product that is a multiple of 10.

Here are 2 examples:

decimal a = 0.4M;
decimal b = 1.05M;
Console.WriteLine(a * b); // 0.420

decimal c = 0.06M;
decimal d = 1.15M;
Console.WriteLine(c * d); // 0.0690

Why does this happen? And is there a way to remove this trailing 0 from the decimal?

My desired result would be 0.42 / 0.069 instead of 0.420 / 0.0690 as a decimal.

eddex
  • The simple answer is `Math.Round(a * b, 2);`, but I'm still curious why this happens. – eddex Sep 16 '21 at 10:20
  • The reason is because of the [binary representation of `decimal`](https://learn.microsoft.com/en-us/dotnet/api/system.decimal.getbits?view=net-5.0#System_Decimal_GetBits_System_Decimal_). There are multiple ways to represent the value 0.42 or 0.069, and unlike IEEE 754 (`float` & `double`), there's no normalisation. – Sweeper Sep 16 '21 at 10:31
  • https://stackoverflow.com/questions/2996775/why-does-a-c-sharp-system-decimal-remember-trailing-zeros – Jamiec Sep 16 '21 at 10:31
  • @eddex why do you care? Trailing zeroes don't matter. There's no reason to use `Round` - there's nothing to round. If you want to display decimals in a certain way you'll have to use a format string – Panagiotis Kanavos Sep 16 '21 at 10:33
  • @eddex besides, that's how math works. Multiplying two decimal numbers will produce a number whose fractional digits are the sum of the operands' fractional digits. – Panagiotis Kanavos Sep 16 '21 at 10:35
  • @PanagiotisKanavos: that may be the case for math (which can assume infinite precision), but we were taught differently in the sciences. Calculations with `n` significant digits generally resulted in `n` significant digits in the result, because the value `x = 3.5` can mean `3.45 <= x < 3.55`. In math, the value `3.5` is `3.5000000000000000000...` :-) – paxdiablo Sep 16 '21 at 10:59

1 Answer


C# decimals don't just store the value, they also store the scale (how many decimal places were used). You can see that with code like:

decimal d1 = 0M;
decimal d2 = 0.00M;
 
Console.WriteLine(d1);  // 0
Console.WriteLine(d2);  // 0.00
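
If you're curious where that scale is stored, here is a small sketch using decimal.GetBits (per the System.Decimal documentation, the scale sits in bits 16-23 of the flags element it returns); note that the two values still compare equal even though their scales differ:

decimal x = 0.42M;
decimal y = 0.420M;
 
Console.WriteLine(x == y);                                // True - same value
Console.WriteLine((decimal.GetBits(x)[3] >> 16) & 0xFF);  // 2 (scale of 0.42)
Console.WriteLine((decimal.GetBits(y)[3] >> 16) & 0xFF);  // 3 (scale of 0.420)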

This scale can change when you multiply or divide:

decimal d1 = 10M;
decimal d2 = 10.00M;
decimal d3 = 5.0M;
 
Console.WriteLine(d1 * d3); // 50.0
Console.WriteLine(d2 * d3); // 50.000

If you don't want to use the default output format, you need to use a specific one, such as d3.ToString("0.#####"), which will include up to five digits after the decimal point and drop any trailing zeros.

The following complete program shows all the effects above, plus a way to do that specific formatting (the final lines show how to get a fixed number of places after the decimal point, for things like currency):

using System;
                    
public class Program {
  public static void Main() {
    decimal d1 = 0M;
    decimal d2 = 0.00M;
    Console.WriteLine(d1); // 0
    Console.WriteLine(d2); // 0.00

    d1 = 7M;
    d2 = 10.00M;
    decimal d3 = 5.0M;
    Console.WriteLine(d1 * d3); // 35.0
    Console.WriteLine(d2 * d3); // 50.000

    d1 = 1234567.89000M;
    Console.WriteLine(d1);                     // 1234567.89000
    Console.WriteLine(d1.ToString("0.#####")); // 1234567.89

    Console.WriteLine(d2 * d3);                  // 50.000 (see above)
    Console.WriteLine((d2 * d3).ToString("F2")); // 50.00
  }
}
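
If you want to change the stored value itself rather than just how it's displayed, rounding to the scale you actually want (as eddex noted in the comments) is one option. A minimal sketch, assuming two decimal places is always the scale you're after:

decimal a = 0.4M;
decimal b = 1.05M;
 
decimal product = a * b;                  // 0.420 (scale of 3)
decimal rounded = Math.Round(product, 2); // 0.42 (scale reduced to 2)
 
Console.WriteLine(product);  // 0.420
Console.WriteLine(rounded);  // 0.42
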
paxdiablo