This is because `double` has limited binary precision, while `decimal` stores decimal digits directly. Read more here: Difference between decimal, float and double in .NET?

Basically, `decimal` is better at storing the decimal representation of a number.
Edit: answering your original question more explicitly: both of your results are slightly incorrect, since -36.845167 cannot be represented exactly as a `double`. Check out the output of this expression:
result.ToString("G20")
on both of your results and you will see that neither of them equals -36.845167: one is -36.845166999999996 and the other is -36.845167000000004.
So both of them are about 4e-15 away from your original number. What you actually see in the debugger (or when printing to the console) is just rounding during the conversion to string.
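You can demonstrate the same effect outside of .NET; here is a small sketch in Python, whose `float` is the same IEEE 754 double type, using the `decimal` module to reveal the exact binary value that gets stored:

```python
from decimal import Decimal

x = -36.845167                      # parsed into the nearest representable double
exact = Decimal("-36.845167")       # the true decimal value

# Decimal(float) converts the double exactly, exposing every stored digit
stored = Decimal(x)
print(stored)

# The stored double is not exactly -36.845167...
print(stored == exact)              # False

# ...but it is extremely close (within half an ulp, ~4e-15 here)
print(abs(stored - exact) < Decimal("1e-13"))   # True
```

The takeaway is the same as with `ToString("G20")`: the default string conversion rounds to few enough digits that the tiny binary error is hidden, while a high-precision conversion exposes it.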