
For example,

0.0000000000000000000000000001

is represented as (lo mid hi flags):

1 0 0 1c0000

When the above is divided by 10, the result is (lo mid hi flags):

0 0 0 0

But when it is multiplied by 0.1M, the result is (lo mid hi flags):

0 0 0 1c0000

In other words, according to Decimal, 0.0000000000000000000000000001 multiplied by 0.1 is 0.0000000000000000000000000000, but divided by 10 it is 0.

The following code shows the two different displayed results:

var o = 0.0000000000000000000000000001M;
Console.WriteLine($"{o * 0.1M}");  // 0.0000000000000000000000000000
Console.WriteLine($"{o / 10M}");   // 0

I need to be able to replicate this behaviour and all other Decimal arithmetic in a virtual machine. Can someone point me to a spec or explain the rationale? System.Decimal.cs does not seem to offer insights.

UPDATE: So it seems this is just a bug in the decimal multiply implementation. Operators should preserve the scale (according to IEEE 754-2008), but multiply does not.
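
For reference, the scale discussed here is the value stored in bits 16 to 23 of the flags word (bit 31 is the sign), so flags of 1c0000 mean a scale of 28 and flags of 0 mean a scale of 0. Roughly:

int flags = decimal.GetBits(0.0000000000000000000000000001M * 0.1M)[3];   // 0x001C0000
int scale = (flags >> 16) & 0xFF;   // 28
bool negative = flags < 0;          // false
Console.WriteLine($"scale={scale} negative={negative}");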

Frank
  • @Matthew Watson, That's not relevant to the question – ikegami Oct 21 '22 at 15:09
  • @Frank, I'm assuming new Decimal( 0, 0, 0, 0, 0 ) == new Decimal( 0, 0, 0, 0, 0x1C )? If so, you seem to be asking about internal details of a specific implementation/version of .NET. Yet there's no mention of version anywhere in your question. // As for the rationale for using the different representations of the same value? It probably simply uses the one that's the most natural in each circumstance. – ikegami Oct 21 '22 at 15:14
  • @ikegami well numerically they are equal yes but the underlying representation is different and the display output is different, which means that program behaviour is different, and therefore I would not expect the implementation to differ from version to version. I will update the question with an example. – Frank Oct 21 '22 at 15:19
  • Re "*but the underlying representation is different*": not relevant in and of itself. // Re "*the display output is different*": but this is – ikegami Oct 21 '22 at 15:20
  • @ikegami yes, agreed (see update) - note that for Decimal the representation directly drives the display output, etc. – Frank Oct 21 '22 at 15:21

1 Answer


The C# language spec says:

The result of an operation on values of type decimal is that which would result from calculating an exact result (preserving scale, as defined for each operator) and then rounding to fit the representation. Results are rounded to the nearest representable value, and, when a result is equally close to two representable values, to the value that has an even number in the least significant digit position (this is known as “banker’s rounding”). That is, results are exact to at least the 28th decimal place. Note that rounding may produce a zero value from a non-zero value.

Decimal carries at most 28 decimal places, so the nearest representable value to the exact result (1e-29) in your example is zero.

decimal d28 = 1e-28m; // 0.0000000000000000000000000001
Console.WriteLine(d28 / 10);

result: 0.
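
As a quick illustration of the 28-place limit and the banker's rounding the spec describes (Math.Round on decimal also rounds half to even by default):

Console.WriteLine(1m / 3m);          // 0.3333333333333333333333333333 (28 places)
Console.WriteLine(Math.Round(2.5m)); // 2 (midpoint rounds to the even neighbour)
Console.WriteLine(Math.Round(3.5m)); // 4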

The class implementation is available here. Math operators are implemented in a helper class (DecCalc) here.

link to multiplication

link to division

A minor note from the source (the int[] bits constructor) about different representations of the same value being numerically equivalent:

// Note that there are several possible binary representations for the
// same numeric value. For example, the value 1 can be represented as {1,
// 0, 0, 0} (integer value 1 with a scale factor of 0) and equally well as
// {1000, 0, 0, 0x30000} (integer value 1000 with a scale factor of 3).
// The possible binary representations of a particular value are all
// equally valid, and all are numerically equivalent.
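
That note is easy to confirm: decimals with different scales compare equal but format differently, which is exactly the effect in the question. A small sketch using the (lo, mid, hi, isNegative, scale) constructor:

var a = new decimal(1, 0, 0, false, 0);     // {1, 0, 0, 0}          -> "1"
var b = new decimal(1000, 0, 0, false, 3);  // {1000, 0, 0, 0x30000} -> "1.000"
Console.WriteLine(a == b);        // True
Console.WriteLine(a.ToString());  // 1
Console.WriteLine(b.ToString());  // 1.000
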
BurnsBA
  • This, if I understand you correctly, is irrelevant to the question. – Frank Oct 21 '22 at 17:12
  • Though I see your DecCalc link may help answer it. Thanks. – Frank Oct 21 '22 at 17:15
  • Hi @Frank, maybe I misunderstood the question. "Can someone point me to a spec" -> provided link to spec. "I need to be able to replicate this behaviour and all other Decimal arithmetic" -> provided link to math implementations. Maybe you can clarify what you are asking? – BurnsBA Oct 21 '22 at 17:15
  • Yes, sorry, perhaps I responded too soon. I will look at those links. Thanks again. – Frank Oct 21 '22 at 17:16
  • @Frank: That final comment seems to answer your question... although it doesn't indicate what is specified, it indicates that the actual implementation considers them all to be correct results, meaning that it makes no effort to guarantee which of several different encodings of the same value will result from any arithmetic operation. – Ben Voigt Oct 21 '22 at 17:21
  • @BenVoigt I don't take that from that last comment. The result of ToString should be deterministic, but I just need to get to my PC. 5 mins – Frank Oct 21 '22 at 17:23
  • @BurnsBA Thanks, just got back to PC. Sorry for the earlier terse response - was typing with one greasy finger on mobile and cooking in the kitchen with the other hand. Will check those links. – Frank Oct 21 '22 at 17:29
  • @Frank: While you and I might like deterministic `ToString()` results, the formatted string depends on the binary representation, and the implementers wrote they don't care about differences in the binary representation they give you, so in conclusion the chain of (arithmetic operations then `ToString()`) doesn't guarantee how many trailing zeros are in the string. – Ben Voigt Oct 21 '22 at 17:32
  • @BurnsBA OK, got it. I was missing the partial in partial struct in the main Decimal.cs, so that link to the DecCalc class will explain everything. Many thanks! – Frank Oct 21 '22 at 17:34
  • @BenVoigt Thanks. It's a philosophical conundrum for me now. I need to accurately mimic the .NET decimal behaviour, but I am not sure to what extent. – Frank Oct 21 '22 at 17:38
  • Follow-up question: "result (preserving scale, as defined for each operator)". The multiply operator does not preserve the scale. Is this therefore a bug? (@BurnsBA) – Frank Oct 21 '22 at 17:44
  • @Frank can't exactly say. The spec doesn't say much, therefore it's up to the implementation. CoreCLR only specifies that representations are numerically equivalent, which is already true. So your example shows an inconsistency in ToString, but CoreCLR has inconsistencies about other things too... You can try to open a GitHub issue for an answer (probably about not scaling multiplication when the high bits are zero, rather than about ToString). There's an equally likely chance this could be logged as a bug (and fixed who knows when), or marked as "not a bug" to stay consistent with past functionality. – BurnsBA Oct 21 '22 at 18:54
  • @BurnsBA yes, I understand, thanks. That is unfortunate. I think it is a cultural shift: in yesteryear the lack of determinism in such a core library would have been seen as a dramatic problem; more often now, ignoring obscure errors is considered pragmatism. – Frank Oct 21 '22 at 18:58
  • @BurnsBA Just an update in case it interests you. I got the IEEE 754-2008 spec. The spec is unreadable garbage, but it does talk about "preferred" scales and defines them for each operator. Also, earlier versions of .NET implemented the operators differently. It seems that the real issue is the quality of the IEEE spec. It's totally horrendous. – Frank Oct 24 '22 at 11:03
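
For what it's worth, the preferred-scale behaviour for multiplication (the result's scale is the sum of the operands' scales, until the result has to be rounded to fit 28 places as in the original example) is easy to observe:

Console.WriteLine(1.0m * 1.0m);   // 1.00  (scale 1 + scale 1 = scale 2)
Console.WriteLine(0.50m * 4m);    // 2.00  (scale 2 + scale 0 = scale 2)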