This is just a question to advance my own knowledge, because I found something not behaving as expected. Consider the following code:

decimal d0 = 0;
decimal d1 = 0M;
decimal d2 = 0.0M;

string s0 = d0.ToString();
string s1 = d1.ToString();
string s2 = d2.ToString();

The debugger will show d0 and d1 as 0 in a watch window, but it will show d2 as 0.0. Likewise, the strings s0 and s1 will contain "0", while s2 will contain "0.0".

Why is this? Of course 0 == 0.0 mathematically, and in fact `(d1 == d2)` returns `true`. So why does C# treat these differently internally?

Paul
    It doesn't matter what you see in debugger, what matter is [internal decimal representation](http://stackoverflow.com/a/15348989/1997232), which makes `0` and `0.0` seems different. [Duplicate](http://stackoverflow.com/q/5759355/1997232). – Sinatr Oct 14 '15 at 14:49
    _"The scaling factor also preserves any trailing zeros in a Decimal number. Trailing zeros do not affect the value of a Decimal number in arithmetic or comparison operations. However, **trailing zeros might be revealed by the ToString method if an appropriate format string is applied**."_ https://msdn.microsoft.com/en-us/library/system.decimal(v=vs.110).aspx – Tim Schmelter Oct 14 '15 at 14:49
    Try calling `decimal.GetBits(d0)` and compare the results with `decimal.GetBits(d1)`. – DavidG Oct 14 '15 at 14:51
  • @DavidG You are right. `GetBits()` returns different results: `[0,0,0,0]` vs `[0,0,0,65536]`. – Paul Oct 14 '15 at 14:53
  • @TimSchmelter Brilliant, thank you! – Paul Oct 14 '15 at 14:57
  • When I say `d0 = 0`, I am expressing an *infinite* amount of significant digits. The answer is **0**. That's it. Yet it internally seems to think I was unsure and that my significant digits are unknown. Guess I'm not a fan of this behavior. – Paul Oct 14 '15 at 15:07
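
Putting the comments together, here is a small sketch of what `decimal.GetBits` reveals. The expected array values come from the comment thread above; the bit layout (sign and scale packed into the fourth 32-bit word) is the documented `System.Decimal` format:

```csharp
using System;

class DecimalScaleDemo
{
    static void Main()
    {
        decimal d1 = 0M;
        decimal d2 = 0.0M;

        // A decimal is 128 bits: three 32-bit words of mantissa,
        // plus one word holding the sign bit and the scale.
        int[] bits1 = decimal.GetBits(d1);  // [0, 0, 0, 0]
        int[] bits2 = decimal.GetBits(d2);  // [0, 0, 0, 65536]

        // The scale (power-of-ten exponent, 0..28) lives in
        // bits 16-23 of the last word: 65536 == 1 << 16.
        int scale1 = (bits1[3] >> 16) & 0xFF;  // 0 -> value is 0 * 10^0
        int scale2 = (bits2[3] >> 16) & 0xFF;  // 1 -> value is 0 * 10^-1

        Console.WriteLine(scale1);    // 0
        Console.WriteLine(scale2);    // 1
        Console.WriteLine(d1 == d2);  // True: comparison ignores the scale
    }
}
```

So `0M` and `0.0M` store the same mantissa (zero) but different scales, and `ToString()` uses the scale to decide how many digits to print, while `==` does not.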
