
The following C# code:

int n = 3;
double  dbl = 1d / n;
decimal dec = 1m / n;
Console.WriteLine(dbl * n == 1d);
Console.WriteLine(dec * n == 1m);

outputs

True
False

Obviously, neither double nor decimal can represent 1/3 exactly. But dbl * n is rounded to 1 and dec * n is not. Why? Where is this behaviour documented?

UPDATE

Please note that my main question here is why they behave differently. Presuming that the choice of rounding was a conscious one made when IEEE 754 and .NET were designed, I would like to know the reasons for choosing one type of rounding over the other. In the above example, double seems to perform better, producing the mathematically correct answer despite having fewer significant digits than decimal. Why did the creators of decimal not use the same rounding? Are there scenarios where the existing behaviour of decimal would be more beneficial?
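
As an aside to that last question: the usual scenario where decimal is preferred is arithmetic on values that are already exact in base 10, such as money. A small illustration (added for context, not part of the snippet above):

Console.WriteLine(0.1 + 0.2 == 0.3);    // False: 0.1 and 0.2 have no exact binary representation
Console.WriteLine(0.1m + 0.2m == 0.3m); // True: all three values are exact in decimal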

Sergey Slepov
  • double => IEEE 754 ... decimal is the obvious one => let's assume decimal has N digits of precision: 1/3 = 0.333333... up to the Nth place, so 0.3333... * 3 is 0.9999..., still up to the Nth place – Selvin Nov 08 '18 at 12:23
  • 1
    Before the `Console.WriteLine` lines, what is the value of `dec`. That answers the `dec` side of the question. For the `dbl` side of the question - what documentation are you looking for? How is this inconsistent with your understanding of floating point maths? _Note that the behaviour of `dbl` is certainly not **guaranteed** - it will likely work differently on different CPUs and runtimes._ – mjwills Nov 08 '18 at 12:28
  • 5
    @Sinatr: `decimal` doesn't "restore itself", that's the point. `0.33....3 * 3 == 0.99...9`. The result can be represented exactly. This doesn't happen for `double` because the multiplied fraction has more (binary) digits than the ultimate result, and rounding brings things back to `1`. There's no (integer) `n` that would result in the same outcome for `decimal`, but the same effect can be achieved by using a `decimal n` of `0.3`. – Jeroen Mostert Nov 08 '18 at 13:09
  • @JeroenMostert, I've deleted my comment (because I made a mistake) before you posted yours, sorry. So you are saying that `1m / 3` multiplied by `3` is not `1` - good, can you explain slowly and clearly why? There are at least 3 people here who don't know. – Sinatr Nov 08 '18 at 13:32
  • @Sinatr: you're right, my comment was too glib. The IEEE behavior for the results of floating-point multiplication is well documented, but the same is not true for `decimal`. It does round, of course, but not in the same way. The .NET Core code for it is [here](https://github.com/dotnet/coreclr/blob/master/src/System.Private.CoreLib/shared/System/Decimal.DecCalc.cs#L1354). Reversing this to deduce exactly how it rounds is an exercise I have to defer. – Jeroen Mostert Nov 08 '18 at 13:59
  • 2
    https://stackoverflow.com/questions/618535/difference-between-decimal-float-and-double-in-net – Daniel A. White Nov 08 '18 at 15:24
  • @Daniel, thanks but I couldn't find the answer to my question on that page. – Sergey Slepov Nov 08 '18 at 16:20
  • @JeroenMostert, thanks for your comment. I agree. Would you care converting it to an answer? – Sergey Slepov Nov 08 '18 at 18:03
  • Not without actually studying the `decimal` multiplication algorithm and explaining how it works in detail, which I'm not willing to make the time for. (And I certainly have no insight into *why* `decimal` works the way it does.) – Jeroen Mostert Nov 08 '18 at 18:22
  • 1
    @JeroenMostert, as you said, the reason why decimal seems to behave differently to double is not that they use different type of rounding but there is no overflow to trigger the rounding. The result is represented exactly in a decimal (0.33....3 * 3 == 0.99...9) while for double it is not. I think this answers my question. – Sergey Slepov Nov 08 '18 at 18:39
  • No, that comment was wrong! I should probably delete it, but there'd be some loss of context. `0.3333...` in a `decimal` occupies all the (binary!) digits there are. Multiplying it by `3` would definitely result in an overflow of the available (binary!) digits, if it were not for `decimal` rounding the results -- in a manner different from how `double` does it. If `decimal` actually stored decimal digits internally, this explanation would hold some water, but it does not. It may emulate this behavior through its rounding method, but it's too simplistic to say there's no overflow. – Jeroen Mostert Nov 08 '18 at 18:42
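
To make the comments above concrete, here is a small sketch (my addition, not from the original post) using decimal.GetBits to look at how 1m / 3 is stored: the scale is already at its maximum of 28 digits, so multiplying by 3 produces 0.9999... rather than 1:

decimal third = 1m / 3;
int[] bits = decimal.GetBits(third);  // low, mid, high 32 bits of the significand, plus flags
int scale = (bits[3] >> 16) & 0xFF;   // the power of ten the significand is divided by

Console.WriteLine(third);      // 0.3333333333333333333333333333
Console.WriteLine(scale);      // 28 – every available decimal digit is already in use
Console.WriteLine(third * 3);  // 0.9999999999999999999999999999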

1 Answer


test:

int n = 3;
double  dbl = 1d / n;
decimal dec = 1m / n;

Console.WriteLine("/n");
Console.WriteLine(dbl);
Console.WriteLine(dec);

Console.WriteLine("*n");
Console.WriteLine(dbl * n);
Console.WriteLine(dec * n);

result:

/n
0.333333333333333
0.3333333333333333333333333333
*n
1
0.9999999999999999999999999999

decimal is stored with base 10; double and single use base 2. Probably for double, after 3 * 0.333333333333333 there is a binary overflow in the CPU and the result becomes 1, but 3 * 0.3...3 in base 10 gives no overflow, so the result is 0.9...9.
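
For the double side this can be made visible with a sketch I'm adding here (assuming the standard IEEE 754 round-to-nearest-even behaviour): the exact product of 3 and the nearest double to 1/3 is 1 - 2^-54, a tie between 1 and the next double below it, and ties-to-even resolves it to exactly 1.0:

double dbl = 1d / 3;

Console.WriteLine(dbl.ToString("G17"));        // 0.33333333333333331 – nearest double to 1/3
Console.WriteLine((dbl * 3).ToString("G17"));  // 1 – the product rounds back to exactly 1.0
Console.WriteLine(BitConverter.DoubleToInt64Bits(dbl * 3) == BitConverter.DoubleToInt64Bits(1.0)); // True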

AndrewF
  • No. The significand of `decimal` is implemented as a 96-bit integer. It doesn't use a binary coded decimal. Multiplying the 96-bit integer representing `333333..` by `3` does not fit in the result, but `decimal` rounds this in such a way that the outcome is `0.9999...` and not `1`. If your explanation was correct, then `1m / 0.3m * 0.3m` should also not end up as `1` (since all operands are exactly representable), but it does. – Jeroen Mostert Nov 08 '18 at 18:19
  • @JeroenMostert ok, if the mantissa is a 96-bit integer, the reason could be in: 1) the exponent base: base 2 for float and base 10 for decimal (a difference in how the point is moved in the source binary integer, or after it is translated into a base-10 number); 2) our 0.3 can be something periodic in base-2 binary, which can have a knock-on effect, and maybe not periodic in binary with base 10 - just like 11b * [10dec]^[-1b]; 3) the normalized storage for double (and not for decimal) can also have an effect. – AndrewF Nov 09 '18 at 10:34
  • @JeroenMostert about 1m / 0.3m * 0.3m. Maybe (just maybe): 4a) 1m / 0.3m => the decimal 3.33333...333 => to represent ALL the 3s: [33...33d * 10d^-27d] => a long 96-bit binary value holding all the decimal 3s: [10101000101010101b... * 10d^-27d] (not the real binary here, just an illustration); 4b) for multiplying by 0.3, it can maybe be scaled to the same 27d, like [0300...00d * 10d^-27d] => [01...b * 10d^-27]; 4c) after the operation on the 96-bit binary mantissas there can be an overflow, and the resulting 1 appears. Or, sure, maybe something else. – AndrewF Nov 09 '18 at 10:35
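
Building on the comments above, a small sketch (my addition) that shows the significand really is a single 96-bit binary integer rather than binary-coded decimal digits, and that reproduces the two comparisons Jeroen Mostert mentions. BigInteger (from System.Numerics) is used only to print the 96-bit value:

using System.Numerics;

int[] bits = decimal.GetBits(1m / 3);
// Reassemble the 96-bit significand from its three 32-bit chunks.
BigInteger significand = ((BigInteger)(uint)bits[2] << 64)
                       | ((BigInteger)(uint)bits[1] << 32)
                       |  (uint)bits[0];

Console.WriteLine(significand);              // 3333333333333333333333333333 – one binary integer, not BCD digits
Console.WriteLine(1m / 3m * 3m == 1m);       // False – the product stays 0.9999...
Console.WriteLine(1m / 0.3m * 0.3m == 1m);   // True – here decimal's rounding does land back on 1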