4

I want to efficiently ensure a decimal value has at least N (=3 in the example below) places, prior to doing arithmetic operations.

Obviously I could format with "0.000######....#" and then parse, but that's relatively inefficient, and I'm looking for a solution that avoids converting to and from a string.
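To be concrete, the kind of round-trip I want to avoid would look something like this (the exact format string is only illustrative):

decimal d = 1.23M;
// Format with at least three forced decimal places, then parse back;
// decimal.Parse preserves the trailing zeros in the value's scale.
string s = d.ToString("0.000" + new string('#', 25), System.Globalization.CultureInfo.InvariantCulture);
d = decimal.Parse(s, System.Globalization.CultureInfo.InvariantCulture);
Console.WriteLine(d); // 1.230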

I've tried the following solution:

decimal d = 1.23M;
d = d + 1.000M - 1;
Console.WriteLine("Result = " + d.ToString()); // 1.230

which seems to work for all values <= Decimal.MaxValue - 1 when compiled using Visual Studio 2015 in both Debug and Release builds.

But I have a nagging suspicion that compilers may be allowed to optimize out the (1.000 - 1). Is there anything in the C# specification that guarantees this will always work?

Or is there a better solution, e.g. using Decimal.GetBits?

UPDATE

Following up on Jon Skeet's answer: I had previously tried adding 0.000M, but this didn't work on dotnetfiddle, so I was surprised to see that Decimal.Add(d, 0.000M) does work. Here's a dotnetfiddle comparing d + 0.000M and decimal.Add(d, 0.000M): the results differ on dotnetfiddle, but are identical when the same code is compiled using Visual Studio 2015:

decimal d = 1.23M;
decimal r1 = decimal.Add(d, 0.000M);
decimal r2 = d + 0.000M;
Console.WriteLine("Result1 = " + r1.ToString());  // 1.230 
Console.WriteLine("Result2 = " + r2.ToString());  // 1.23 on dotnetfiddle

So at least some behavior seems to be compiler-dependent, which isn't reassuring.

Joe
    Why? What's the difference between `1.23` and `1.23000`? – Zohar Peled Nov 05 '17 at 14:01
  • @ZoharPeled - the difference is precision: 1.23 is a value that is accurate to two decimal places, and 1.23000 is accurate to five decimal places. – Joe Nov 05 '17 at 14:09
  • And, once again, what’s the difference? Seriously, I’m curious. `1.23` and `1.23000` are both represented without error by `decimal` so why the need of the extra significant digits? Both are the same `decimal` number. I fail to see the added value here. – InBetween Nov 05 '17 at 18:09
  • @InBetween - the strings that I output after a series of calculations must have a number of decimal places based on how the calculation was done. I don't propose to go into more detail, but if this were never needed, why did Microsoft go to the effort of making decimal arithmetic preserve trailing zeroes? – Joe Nov 05 '17 at 19:04
  • We’d have to ask MS ;) but it doesn’t necessarily mean anything; it could very well be that figuring out if the extra significant digits are actually insignificant and discarding them is more expensive than simply keeping them. But that’s besides the point, I’m just curious because I can’t seem to find a use case for this, but I’m pretty sure you have a valid one, don’t get me wrong. – InBetween Nov 05 '17 at 19:08
  • @InBetween - it wasn't just convenience; this behavior was introduced in .NET 1.1 for more strict conformance with the ECMA CLI specification: https://stackoverflow.com/a/1133880/13087 – Joe Nov 05 '17 at 19:14

2 Answers

6

If you're nervous that the compiler will optimize out the operator (although I doubt that it would ever do so) you could just call the Add method directly. Note that you don't need to add and then subtract - you can just add 0.000m. So for example:

public static decimal EnsureThreeDecimalPlaces(decimal input) =>
    decimal.Add(input, 0.000m);

That appears to work fine - if you're nervous about what the compiler will do with the constant, you could keep the bits in an array, converting it just once:

private static readonly decimal ZeroWithThreeDecimals =
    new decimal(new[] { 0, 0, 0, 196608 }); // 0.000m

public static decimal EnsureThreeDecimalPlaces(decimal input) =>
    decimal.Add(input, ZeroWithThreeDecimals);

I think that's a bit over the top though - particularly if you have good unit tests in place. (If you test against the compiled code you'll be deploying, there's no way the compiler can get in there afterwards - and I'd be really surprised to see the JIT intervene here.)
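For example, I'd expect calls to behave like this (the expected outputs in the comments follow from decimal addition keeping the larger of the two operands' scales):

Console.WriteLine(EnsureThreeDecimalPlaces(1.2M));    // 1.200
Console.WriteLine(EnsureThreeDecimalPlaces(1.23M));   // 1.230
Console.WriteLine(EnsureThreeDecimalPlaces(1.2345M)); // 1.2345 (already has more than three places)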

Jon Skeet
  • Thanks for this, and it does seem like `Decimal.Add` is the way to go, with good unit tests as you say. But I'm still nervous about compiler-specific behavior - see update. My current thought is to have a helper method to round to N places which will add one from a static array of 29 constant values 0.0M, 0.00M, 0.000M, ... 0.00...29 decimals...0 (a sketch of this idea appears after these comments). Any thoughts about the different behavior between dotnetfiddle and VS2015 compilers? – Joe Nov 05 '17 at 14:33
  • @Joe: My guess is that dotnetfiddle may be using the Mono compiler behind the scenes, and maybe that's got an invalid optimization. (I really believe this *is* an invalid optimization.) But as I say, so long as it's at the C# compiler level rather than the JIT compiler level, unit tests should be fine. (I'm not seeing this with mcs on my machine, admittedly.) – Jon Skeet Nov 05 '17 at 15:05
  • Thanks again. Incidentally, the dotnetfiddle site has options to select ".NET 4.5" or "Roslyn 2.0" compilers. Both give the same result. – Joe Nov 05 '17 at 15:44
  • ... also I've sent a message to dotnetfiddle support asking them about their compiler version and options. – Joe Nov 05 '17 at 16:11
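A minimal sketch of the lookup-table helper described in the comment above (the class and member names here are placeholders, not from the original posts; it assumes System.Linq is available):

using System;
using System.Linq;

internal static class DecimalScaleHelper
{
    // Zero constants with 0 to 28 decimal places: 0M, 0.0M, 0.00M, ...
    private static readonly decimal[] ZeroWithScale =
        Enumerable.Range(0, 29)
                  .Select(scale => new decimal(0, 0, 0, false, (byte)scale))
                  .ToArray();

    // Decimal addition keeps the larger of the two operands' scales,
    // so adding a zero with scale minScale forces at least that many places.
    public static decimal EnsureMinimumScale(decimal value, int minScale) =>
        decimal.Add(value, ZeroWithScale[minScale]);
}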
0

The Decimal.ToString() method outputs a number of decimal places determined by the structure's internal scaling factor. This factor can range from 0 to 28. You can obtain the information needed to determine this scaling factor by calling the Decimal.GetBits method. This method's name is slightly misleading, as it returns an array of four integer values that can be passed to the Decimal Constructor (Int32[]); the reason I mention this constructor is that the "Remarks" section of its documentation describes the bit layout better than the documentation for the GetBits method.

Using this information you can determine the Decimal value's scale factor and thus know how many decimal places the default ToString method will yield. The following code demonstrates this as an extension method named "Scale". I also included an extension method named "ToStringMinScale" to format the Decimal to a minimum scale factor value. If the Decimal's scale factor is greater than the specified minimum, that value will be used.

internal static class DecimalExtensions
    {
    public static Int32 Scale(this decimal d)
        {
        Int32[] bits = decimal.GetBits(d);

        // From: Decimal Constructor (Int32[]) - Remarks
        // https://msdn.microsoft.com/en-us/library/t1de0ya1(v=vs.100).aspx

        // The binary representation of a Decimal number consists of a 1-bit sign, 
        // a 96-bit integer number, and a scaling factor used to divide 
        // the integer number and specify what portion of it is a decimal fraction. 
        // The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28.

        // bits is a four-element long array of 32-bit signed integers.

        // bits [0], bits [1], and bits [2] contain the low, middle, and high 32 bits of the 96-bit integer number.

        // bits [3] contains the scale factor and sign, and consists of following parts:

        // Bits 0 to 15, the lower word, are unused and must be zero.

        // Bits 16 to 23 must contain an exponent between 0 and 28, which indicates the power of 10 to divide the integer number.

        // Bits 24 to 30 are unused and must be zero.

        // Bit 31 contains the sign; 0 meaning positive, and 1 meaning negative.

        // mask off everything except bits 16 to 23, which hold the scale factor
        Int32 masked = bits[3] & 0xFF0000;
        // shift the masked value 16 bits to the right to obtain the scaleFactor
        Int32 scaleFactor = masked >> 16;

        return scaleFactor;
        }

    public static string ToStringMinScale(this decimal d, Int32 minScale)
        {
        if (minScale < 0 || minScale > 28)
            {
            throw new ArgumentException("minScale must range from 0 to 28 (inclusive)");
            }
        Int32 scale = Math.Max(d.Scale(), minScale);
        return d.ToString("N" + scale.ToString());
        }

    }
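For example, used as follows (the output shown assumes a culture that uses "." as the decimal separator, since the "N" format specifier is culture-sensitive):

decimal d = 1.23M;
Console.WriteLine(d.Scale());                    // 2
Console.WriteLine(d.ToStringMinScale(3));        // 1.230
Console.WriteLine(1.2345M.ToStringMinScale(3));  // 1.2345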
TnTinMn
  • Thanks, but as I said in the question I'm looking for a solution without converting to a string. – Joe Nov 05 '17 at 19:06
  • @joe, Could you clarify what it is that you want to achieve? Is it setting the Decimal's scale? If so, to what end? You say without converting to String, but that is exactly what your code sample `Console.WriteLine("Result = " + d.ToString())` shows. – TnTinMn Nov 05 '17 at 19:22
  • The `Console.WriteLine` is simply to illustrate the result I want. In reality I will use the decimal for other arithmetical calculations before converting the final result to a string. What I want is, say, a conversion that ensures the output has at least 3 decimals, so 1.2 becomes 1.200, 1.23 becomes 1.230; 1.2345 becomes 1.2345 etc. – Joe Nov 05 '17 at 19:59
  • @joe, I edited my answer to include a method I use to change the Decimal's scale factor. However, after looking at the answer you linked to in response to InBetween, I do not understand why you are not doing something like this, as you apparently understand how to change the scale factor and, subsequently, the result of Decimal.ToString. – TnTinMn Nov 05 '17 at 22:58