Let me start by pointing out the relevant part of the C# Language Specification, section 4.1.7 (The decimal type):
If one of the operands of a binary operator is of type decimal, then the other operand must be of an integral type or of type decimal. If an integral type operand is present, it is converted to decimal before the operation is performed.
The result of an operation on values of type decimal is that which would result from calculating an exact result (preserving scale, as defined for each operator) and then rounding to fit the representation. Results are rounded to the nearest representable value, and, when a result is equally close to two representable values, to the value that has an even number in the least significant digit position (this is known as “banker’s rounding”). A zero result always has a sign of 0 and a scale of 0.
This tells you that result3 and result4 in your test should give identical results. (Also note the word "integral": implicit float/double conversion to decimal is not supported.)
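As an aside, you can see the banker's rounding the spec describes in action. The sketch below uses Python's decimal module purely as a stand-in for C# (it also defaults to round-half-to-even); `quantize` here plays the role of rounding to a fixed number of places:

```python
from decimal import Decimal

# Round-half-to-even ("banker's rounding"): a tie goes to the even digit.
# Python's decimal module defaults to ROUND_HALF_EVEN, like C#'s decimal.
print(Decimal("0.125").quantize(Decimal("0.01")))  # 0.12 (tie -> even digit 2)
print(Decimal("0.135").quantize(Decimal("0.01")))  # 0.14 (tie -> even digit 4)
```

Both inputs are exact ties, yet one rounds down and the other up, because the rule targets the even final digit rather than always rounding half up.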
Now, in your case you've stumbled upon an equation with some nice simplification properties, in that 196.5 / 12 => 393 * (2/3) / 12 => 131 * 2 / 4. In result3 and result4, you perform the division first (1/12 gives 0.0833...), producing a repeating value that can't be represented exactly in decimal, and then you scale that rounded 0.0833... back up (your order of operations is to divide by 12, then multiply by b).
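The divide-first versus multiply-first effect can be sketched in any fixed-precision decimal arithmetic. Here it is in Python's decimal module (28 significant digits by default, assumed here as a rough stand-in for C#'s decimal), using 1/3 as the unrepresentable intermediate:

```python
from decimal import Decimal, getcontext

getcontext().prec = 28  # precision comparable to C#'s decimal

one, three = Decimal(1), Decimal(3)

# Divide first: 1/3 is rounded to 28 digits, and that error is scaled up.
print((one / three) * three)   # 0.9999999999999999999999999999
# Multiply first: the intermediate (3) is exact, so the division is too.
print((one * three) / three)   # 1
```

Same mathematical value, different results, purely because of where the rounding of the unrepresentable intermediate happens.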
You can reproduce the same effect by first rounding any number that can't be represented exactly in decimal, i.e., something with repeating digits, say 1/7m. For example, 917m * (1 / 7m) = 131.00000000000000000000000004, but note that 917m / 7m = 131 exactly.
You can mitigate this by performing the multiplications first (being careful of overflow). The other option is to round your results to the precision you actually need.
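Both mitigations, again sketched with Python's decimal module as a stand-in for C# (here `quantize` plays the role of Math.Round, and b = 300 is just an illustrative value):

```python
from decimal import Decimal, getcontext

getcontext().prec = 28
b, divisor = Decimal(300), Decimal(3)

naive = (Decimal(1) / divisor) * b          # divide first: the error scales up
better = (Decimal(1) * b) / divisor         # multiply first: exact in this case
rounded = naive.quantize(Decimal("0.01"))   # or round the result afterwards

print(naive)    # 99.999...9 (26 nines)
print(better)   # 100
print(rounded)  # 100.00
```

Reordering avoids the error entirely when the intermediates are exact; rounding merely hides an error that is already far smaller than the precision you care about, which is usually good enough.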
This is probably a dupe of something like Is C# Decimal Rounding Inconsistent? or maybe Rounding of decimal in c# seems wrong...