
I have been looking for a way to determine the scale and precision of a decimal in C#, which led me to several SO questions, yet none of them seem to have correct answers, or they have misleading titles (they are really about SQL Server or some other database, not C#), or no answers at all. The following post, I think, is the closest to what I'm after, but even this seems wrong:

Determine the decimal precision of an input number

First, there seems to be some confusion about the difference between scale and precision. Per Google (per MSDN):

Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number.

With that being said, the number 12345.67890M would have a scale of 5 and a precision of 10. I have not discovered a single code example that would accurately calculate this in C#.

I want to make two helper methods, decimal.Scale() and decimal.Precision(), such that the following unit test passes:

[TestMethod]
public void ScaleAndPrecisionTest()
{
    //arrange 
    var number = 12345.67890M;

    //act
    var scale = number.Scale();
    var precision = number.Precision();

    //assert
    Assert.IsTrue(precision == 10);
    Assert.IsTrue(scale == 5);
}

but I have yet to find a snippet that will do this, though several people have suggested using decimal.GetBits(), and others have said to convert it to a string and parse it.

Converting it to a string and parsing it is, in my mind, an awful idea, even disregarding the localization issue with the decimal point. The math behind the GetBits() method, however, is like Greek to me.
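For reference, here is a raw dump of what decimal.GetBits() returns for the example value (a quick sketch I can at least run, even if the interpretation escapes me):

```csharp
using System;

class GetBitsDump
{
    static void Main()
    {
        decimal x = 12345.67890M;
        // GetBits returns four ints: lo, mid, hi of a 96-bit integer, then a flags word.
        int[] bits = decimal.GetBits(x);
        foreach (int b in bits)
            Console.WriteLine(b);
        // 12345.67890M is stored as the integer 1234567890 scaled by 10^-5,
        // so bits[0] is 1234567890 and the scale 5 sits in bits 16-23 of bits[3].
    }
}
```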

Can anyone describe what the calculations would look like for determining scale and precision in a decimal value for C#?

Racil Hilan
Jeremy Holovacs
  • Can you explain why you believe the precision is 10? There are 11 digits in that number, with two trailing zeros, leaving 9 obviously significant ones. I'm not seeing where 10 comes from. – Ben Voigt Nov 03 '15 at 02:27
  • Wouldn't your edit also change the scale? – Ben Voigt Nov 03 '15 at 02:32
  • ...yes. Yes it would. I suck. Fixed it – Jeremy Holovacs Nov 03 '15 at 02:34
  • Can one of you guys explain to me how hard it is to write a function that counts the digits exactly like you're counting it manually? And what is the benefit of such functions anyway? – Racil Hilan Nov 03 '15 at 02:36
  • @RonBeyer, in a decimal data type in c#, trailing zeroes are not insignificant. For example, `string.Format("{0}", 12345.67890M)` yields 11 characters. – Jeremy Holovacs Nov 03 '15 at 02:42
  • @RacilHilan Counting the digits requires you to convert a decimal value to a string. That has a smell to it that I really don't like. – Jeremy Holovacs Nov 03 '15 at 02:45
  • @JeremyHolovacs You are right, they are stored with the `Decimal` data type. – Ron Beyer Nov 03 '15 at 02:45
  • @JeremyHolovacs While I agree with you about the smell of converting it to string, there is no "mathematical" ways that gives you those answers, simply because they're not mathematical concepts. Those concepts are storage concepts. For instance, you said to Ron that "in C#, trailing zeroes are not insignificant". Well, can we agree that they are insignificant in math? And if we agree on that, then math can simply NOT give you the answers you want. And I'm still wondering about the benefit of those functions anyway. – Racil Hilan Nov 03 '15 at 02:55
  • @RacilHilan, the math I am referring to is the mathematical operations necessary to perform on the array of byte arrays which hold the value of the decimal... not on the number itself, per se, but indeed on the value stored within the `Decimal` class. Certainly the trailing zeroes are not mathematically significant in terms of how one would use a `decimal` type... unless validating scale and precision. – Jeremy Holovacs Nov 03 '15 at 03:00
  • I see. You should've stated that in your question, but I agree with you that this seems a very logical way of solving the problem. – Racil Hilan Nov 03 '15 at 03:02
  • @RacilHilan, it should be noted that "significance" isn't a mathematical concept, really. It relates more to applications wherein significant digits are determined by measurement accuracy. In all such applications, trailing zeroes can _certainly_ be significant. – Marc L. Apr 17 '20 at 19:36
  • @MarcL. I never said "significance" was a mathematical concept. What I was saying is that the numbers 1.1 and 1.1000 (for example) are equal and behave the same in all math operations. Trailing zeros don't add anything in math, thus they are insignificant in the sense that removing them doesn't change anything in math. – Racil Hilan Apr 18 '20 at 20:51

3 Answers


This is how you get the scale using the GetBits() function:

decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
byte scale = (byte) ((bits[3] >> 16) & 0x7F); 

And the best way I can think of to get the precision is by removing the decimal point (i.e. use the Decimal constructor to reconstruct the decimal number without the scale mentioned above) and then use the logarithm:

decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
//We will use false for the sign (false = positive), because we don't care about it.
//We will use 0 for the last argument instead of bits[3] to eliminate the decimal point.
decimal xx = new Decimal(bits[0], bits[1], bits[2], false, 0);
int precision = (int)Math.Floor(Math.Log10((double)xx)) + 1;

Now we can put them into extensions:

public static class Extensions{
    public static int GetScale(this decimal value){
        if(value == 0)
            return 0;
        int[] bits = decimal.GetBits(value);
        return (int) ((bits[3] >> 16) & 0x7F);
    }

    public static int GetPrecision(this decimal value){
        if(value == 0)
            return 0;
        int[] bits = decimal.GetBits(value);
        //We will use false for the sign (false = positive), because we don't care about it.
        //We will use 0 for the last argument instead of bits[3] to eliminate the decimal point.
        decimal d = new Decimal(bits[0], bits[1], bits[2], false, 0);
        return (int)Math.Floor(Math.Log10((double)d)) + 1;
    }
}
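For anyone who wants to verify the two formulas without wiring up the extension class, here are the same calculations inline against the question's example value (a sketch repeating the logic above):

```csharp
using System;

class ScalePrecisionCheck
{
    static void Main()
    {
        decimal number = 12345.67890M;
        int[] bits = decimal.GetBits(number);

        // Scale: the raw exponent field in bits 16-23 of the flags word.
        int scale = (bits[3] >> 16) & 0x7F;

        // Precision: rebuild the value with scale 0, then count its digits via log10.
        decimal unscaled = new decimal(bits[0], bits[1], bits[2], false, 0);
        int precision = (int)Math.Floor(Math.Log10((double)unscaled)) + 1;

        Console.WriteLine($"scale={scale}, precision={precision}"); // scale=5, precision=10
    }
}
```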

And here is a fiddle.

Racil Hilan
  • @ivan_pozdeev I was trying to figure out the best way to get the precision. I've added links to the MSDN. Sorry, I don't see a real point in reiterating what's perfectly explained by the MSDN, but you did a good job at that. But what error are you referring to? – Racil Hilan Nov 03 '15 at 06:47
  • Excellent. This seems to work for all the edge cases I can throw at it. – Jeremy Holovacs Nov 03 '15 at 11:42
  • @RacilHilan The MSDN's explanation wasn't good enough for me, so I did what I think is a better job. As for the error, it's `&0x7F` instead of `&0xFF` (bits 16 through 23 is 8 bits). This won't manifest itself _in normal circumstances_ since the highest legal value is 28 but it's still a breach of spec which is akin to laying a land mine for someone to eventually step on (e.g. it will cause a discrepancy between your and .NET's code if you somehow get (e.g. deserialize) a malformed Decimal). If you think there's _any_ way to figure this out from your answer, cast a stone at me. – ivan_pozdeev Nov 03 '15 at 18:57
  • @ivan_pozdeev I'm not in the stones business. :-) I did say **you did a good job**, didn't I? As for the `&0x7F`, your point regarding the specs is valid, but it is copy/paste from the MSDN. If you think it is wrong, you can leave a comment on that page so they fix it. However, I used it in a function that is returning the scale as an integer, which is completely safe for serialization or any other usage. – Racil Hilan Nov 03 '15 at 19:19
  • For the scale, you have to reconstruct the decimal too if you don't want the trailing zero. – Marco Guignard Nov 08 '16 at 10:48
  • 0.000m == 0 is true so short circuiting on zero needs to be removed. – Dominik Holland Apr 12 '19 at 15:47

First of all, solve the "physical" problem: how you're going to decide which digits are significant. The fact is, "precision" has no physical meaning unless you know or guess the absolute error.


Now, there are 2 fundamental ways to determine each digit (and thus, their number):

  • get+interpret the meaningful parts
  • calculate mathematically

The 2nd way can't detect trailing zeros in the fractional part (which may or may not be significant depending on your answer to the "physical" problem), so I won't cover it unless requested.

For the first one, in the Decimal's interface, I see 2 basic methods to get the parts: ToString() (a few overloads) and GetBits().

  1. ToString(String, IFormatProvider) is actually a reliable way since you can define the format exactly.

  2. The semantics of GetBits() result are documented clearly in its MSDN article (so laments like "it's Greek to me" won't do ;) ). Decompiling with ILSpy shows that it's actually a tuple of the object's raw data fields:

    public static int[] GetBits(decimal d)
    {
        return new int[]
        {
            d.lo,
            d.mid,
            d.hi,
            d.flags
        };
    }
    

    And their semantics are:

    • |high|mid|low| - binary digits (96 bits), interpreted as an integer (=aligned to the right)
    • flags:
      • bits 16 to 23 - "the power of 10 to divide the integer number" (=number of fractional decimal digits)
        • (thus (flags>>16)&0xFF is the raw value of this field)
      • bit 31 - sign (doesn't concern us)

    as you can see, this is very similar to IEEE 754 floats.

    So, the number of fractional digits is the exponent value. The number of total digits is the number of digits in the decimal representation of the 96-bit integer.
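Those two observations can be turned into code directly. As a sketch of my own (not from the original answer), the digit count of the 96-bit integer can be taken via BigInteger, which avoids any floating-point rounding in the digit count:

```csharp
using System;
using System.Numerics;

class RawDecimalParts
{
    static void Main()
    {
        decimal d = 12345.67890M;
        int[] bits = decimal.GetBits(d);

        // Fractional digit count: the raw exponent field, bits 16 to 23 of flags.
        int fractionalDigits = (bits[3] >> 16) & 0xFF;

        // Total digit count: digits of the |high|mid|low| 96-bit integer.
        BigInteger mantissa =
            ((BigInteger)(uint)bits[2] << 64) |
            ((BigInteger)(uint)bits[1] << 32) |
            (uint)bits[0];
        int totalDigits = mantissa == 0 ? 1 : mantissa.ToString().Length;

        Console.WriteLine($"{totalDigits} digits, {fractionalDigits} fractional"); // 10 digits, 5 fractional
    }
}
```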

ivan_pozdeev
  • This does not answer the question, and pretty much ignores the question as well. – Jeremy Holovacs Nov 03 '15 at 06:06
  • My goal is not to just give a "try this"-code "answer" but demonstrate how to produce one so you understand what and why you're doing. The gist of each method is highlighted in bold. They don't reference "scale" and "precision" directly because, as the lead section says, these terms may mean different things depending on your application. – ivan_pozdeev Nov 03 '15 at 06:15

Racil's answer gives you the value of the internal scale field of the decimal, which is correct, although if the internal representation ever changes it'll be interesting.

In the current format the precision portion of decimal is fixed at 96 bits, which is between 28 and 29 decimal digits depending on the number. All .NET decimal values share this precision. Since this is constant there's no internal value you can use to determine it.
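That fixed limit is easy to confirm (a quick check, not in the original answer): decimal.MaxValue is 2^96 - 1, which has 29 digits.

```csharp
using System;

class DecimalLimit
{
    static void Main()
    {
        // decimal.MaxValue is 2^96 - 1.
        Console.WriteLine(decimal.MaxValue);                   // 79228162514264337593543950335
        Console.WriteLine(decimal.MaxValue.ToString().Length); // 29
    }
}
```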

What you're apparently after though is the number of digits, which we can easily determine from the string representation. We can also get the scale at the same time or at least using the same method.

public struct DecimalInfo
{
    public int Scale;
    public int Length;

    public override string ToString()
    {
        return string.Format("Scale={0}, Length={1}", Scale, Length);
    }
}

public static class Extensions
{
    public static DecimalInfo GetInfo(this decimal value)
    {
        //Use the invariant culture so the separator is always "." (requires System.Globalization),
        //which sidesteps the localization issue mentioned in the question.
        string decStr = value.ToString(CultureInfo.InvariantCulture).Replace("-", "");
        int decpos = decStr.IndexOf(".");
        int length = decStr.Length - (decpos < 0 ? 0 : 1);
        int scale = decpos < 0 ? 0 : length - decpos;
        return new DecimalInfo { Scale = scale, Length = length };
    }
}
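Usage against the question's example value looks like this (a sketch that inlines the same Replace/IndexOf logic so it stands alone, using the invariant culture to guarantee "." as the separator):

```csharp
using System;
using System.Globalization;

class StringInfoCheck
{
    static void Main()
    {
        decimal value = 12345.67890M;
        // decimal preserves trailing zeros, so ToString yields "12345.67890".
        string decStr = value.ToString(CultureInfo.InvariantCulture).Replace("-", "");
        int decpos = decStr.IndexOf(".");
        int length = decStr.Length - (decpos < 0 ? 0 : 1); // digits only, minus the point
        int scale = decpos < 0 ? 0 : length - decpos;      // digits after the point
        Console.WriteLine($"Scale={scale}, Length={length}"); // Scale=5, Length=10
    }
}
```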
Corey