
I have a variable of type decimal whose value is 1.0. I save it into a column in an SQL Server 2012 table, whose type is decimal(10, 8).

After retrieving the value, I see it is 1, but when I call ToString(), the value returned is "1.00000000" (see below).

I realise that the 8 decimal places correspond to the data type in the database. However, there are no attributes or anything in the Entity Framework-generated property that give it this kind of behaviour, so I have no idea how this is happening.

Here are a few tests I've carried out in the Immediate Window:

myDecimal
1
myDecimal.ToString()
"1.00000000"
myDecimal == 1
true
myDecimal == 1.0m
true

As you can see from the last 2 tests, it isn't a case of floating-point error either (not that I was expecting it since decimal is fixed-point, but I had to try it since I ran out of ideas).

Any idea how the decimal's ToString() is resulting in a string with 8 decimal places?

Edit: For comparison, look at the test below.

1m.ToString()
"1"
Gigi
  • Can you show your model class containing this property? – Ofiris Dec 03 '14 at 09:33
  • In your DB column change the datatype from `decimal(10, 8)` to `decimal(10, 1)` – Rajesh Dec 03 '14 at 09:35
  • possible duplicate of [c# - How do I round a decimal value to 2 decimal places (for output on a page)](http://stackoverflow.com/questions/164926/c-sharp-how-do-i-round-a-decimal-value-to-2-decimal-places-for-output-on-a-pa) – Tanner Dec 03 '14 at 09:35
  • @Rajesh and what if there's values like `1.1234` in there? – Tanner Dec 03 '14 at 09:36
  • @Tanner You are right; in that case we may follow the answer from the question you specified. – Rajesh Dec 03 '14 at 09:38
  • @Tanner It's not a duplicate; I know how to format a string if I want to... I'm asking how there is this default formatting with 8 decimal places. – Gigi Dec 03 '14 at 09:41
  • @Gigi because your column definition is specifying a precision of 8 decimal places, if they weren't all zeros then you would be losing accuracy by removing them. Is c# supposed to know that you want `1.00000000` to be represented as `1.0` without you telling it? If it was `1.12345678` would you expect c# to return `1.1` by default? – Tanner Dec 03 '14 at 09:58
  • @Tanner They *are* all zeros! A value of 1 is being returned in the C# decimal. What I asked is: why is this being formatted to 8 decimal places despite the value that has no decimal places? – Gigi Dec 03 '14 at 10:00
  • @Gigi It *does* have decimal places. For example, 1.000 represents something different from 1.0: 1.000 could be 1 +/- 0.0005 while 1.0 could be 1 +/- 0.05. – Andrew Morton Dec 03 '14 at 10:12
  • @Gigi in C#, when you're debugging, the debugger evaluates the value, recognizes that there is nothing after the decimal point and, it being a numeric value, returns it effectively as an integer. But when you call ToString on it, it takes into account the underlying precision and datatype to give you an accurate value. – Tanner Dec 03 '14 at 10:14
  • When you use `decimal.ToString()` it will use current culture. Perhaps (wild guess) something changes how `decimal` is presented in current culture. – Sinatr Dec 03 '14 at 10:18
  • @AndrewMorton `1.0m == 1.000m` gives `true`. (Same for floating-point numbers) – Gigi Dec 03 '14 at 10:21
  • Does other `decimal`s also have `F8` format? Try `((decimal)1).ToString()` under same conditions, what is the result? – Sinatr Dec 03 '14 at 10:24
  • @Sinatr the result is `"1"`. It's basically the same as the first test I did in my question. – Gigi Dec 03 '14 at 10:26
  • @Gigi please just evaluate `yourDecimal.GetType()` to get all of the underlying information about your decimal value and realize that it's not just storing a number, even though that's all the immediate window is showing you when you say, give me the value. – Tanner Dec 03 '14 at 10:27
  • @Tanner please realise that my question is precisely about how the decimal ("not just a number") is being transformed into a specifically formatted string in this case, and that thus my question cannot be a duplicate of the question you specified. – Gigi Dec 03 '14 at 10:29
  • @Gigi I'm not referring to the duplicate I posted; I'm simply trying to highlight that there are underlying properties on decimal values that are used to maintain precision when you call `.ToString()`. Evaluation in the immediate window simply returns a number with minimal precision if there is nothing after the decimal place. If your value was `1.1234` it would return that, as there are significant values stored in the decimal. If you want `.ToString()` to return a different value, then you have to specify a format; otherwise it will take the underlying value with its full precision. – Tanner Dec 03 '14 at 10:56

4 Answers


The reason is that the decimal type is not normalized. There are multiple representations for the same number, and those will be represented as different strings.

This is not a special property of your database type; this is how decimal works normally. There is no special DataAnnotation or anything attached to the variable.

(1m).ToString() == "1"
(1.00000000m).ToString() == "1.00000000"
((1m)==(1.00000000m)) == true

For a given double, there is only one valid representation, i.e. one combination of mantissa · 2^exponent.

For decimal, there are multiple valid representations of mantissa · 10^exponent. Each represents the same number, but the additional information available through the multiple possible representations is used to select the default number of trailing digits when converting the decimal to a string. The exact details are not really well-documented, and I found no information on what exactly happens with the exponent when decimals are added or multiplied. But the effect it has on ToString() is easy to verify.
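
The same non-normalized design exists outside .NET, too. As an illustrative analogy only (Python's decimal module, not the .NET decimal type), the following sketch shows value equality alongside representation-dependent string output:

```python
from decimal import Decimal

a = Decimal("1")           # stored with exponent 0
b = Decimal("1.00000000")  # same numeric value, stored with exponent -8

print(a == b)   # True: equality compares the numeric value
print(str(a))   # "1"
print(str(b))   # "1.00000000": the string keeps the stored scale
```

As in C#, the two objects compare equal but stringify differently, because the scale is part of the representation rather than the value.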

The drawback is that the Equals() and GetHashCode() operations are more complicated than for a normalized number format, and there have been subtle bugs in the implementation: C# Why can equal decimals produce unequal hash values?

This article by Jon Skeet goes into a bit more detail:

A decimal is stored in 128 bits, even though only 102 are strictly necessary. It is convenient to consider the decimal as three 32-bit integers representing the mantissa, and then one integer representing the sign and exponent. The top bit of the last integer is the sign bit (in the normal way, with the bit being set (1) for negative numbers) and bits 16-23 (the low bits of the high 16-bit word) contain the exponent. The other bits must all be clear (0). This representation is the one given by decimal.GetBits(decimal) which returns an array of 4 ints. [...]

The decimal type doesn't normalize itself - it remembers how many decimal digits it has (by maintaining the exponent where possible) and on formatting, zero may be counted as a significant decimal digit.

You can verify that the two decimals you have are not identical by comparing the values returned by decimal.GetBits(), i.e.:

decimal.GetBits(1m) == {int[4]}
    [0]: 1
    [1]: 0
    [2]: 0
    [3]: 0

decimal.GetBits(1.00000000m) == {int[4]}
    [0]: 100000000
    [1]: 0
    [2]: 0
    [3]: 524288
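
For comparison, Python's Decimal exposes its internals through as_tuple(). This is only an analogy to decimal.GetBits(), not the .NET bit layout, but it shows the same idea: equal values, different stored coefficient and exponent:

```python
from decimal import Decimal

print(Decimal("1").as_tuple())
# DecimalTuple(sign=0, digits=(1,), exponent=0)

print(Decimal("1.00000000").as_tuple())
# DecimalTuple(sign=0, digits=(1, 0, 0, 0, 0, 0, 0, 0, 0), exponent=-8)
```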

It may be tempting to rely on this behaviour when formatting your decimals, but I would recommend always explicitly selecting a precision when converting to string, to avoid confusion and unforeseen surprises, for example if the number has been multiplied by some factor beforehand.
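
Continuing the Python analogy (quantize and normalize are Python's names; in C# you would pass a format string such as "F2" to ToString), explicitly choosing a precision makes the output independent of the stored scale:

```python
from decimal import Decimal

d = Decimal("1.00000000")

print(format(d, ".2f"))    # "1.00": explicit two decimal places
print(str(d.normalize()))  # "1": canonical form, trailing zeros stripped

# Going the other way: attach a scale of 8 to a plain 1
print(str(Decimal("1").quantize(Decimal("1.00000000"))))  # "1.00000000"
```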

HugoRune
  • `(1.00000000m).ToString() == "1.00000000"` - this should be the answer tbh, rest is easy to conclude. – Sinatr Dec 03 '14 at 10:36
  • @Sinatr It obviously hasn't been, so far. But I certainly don't mind if this is the answer. I was merely explaining how a decimal works in a more consequential way, rather than technical. =) – J. Steen Dec 03 '14 at 10:39
  • The article by Jon Skeet goes into great detail, and even explains the behaviour for `ToString()` used to be different in .NET 1.0, but doesn't offer the rationale for the change - do you, or does anyone else, know why the behavior was chosen for 1.1 and beyond? – Edouard Poor Nov 25 '17 at 09:28

When Entity Framework is retrieving the values from the returned result of the query, it uses your EDMX definition or the data annotations to know what precision to set on the decimal.

Since you have used decimal(10,8), Entity Framework will set your decimal to a value of 1.00000000. The implementation of ToString in a decimal will respect that precision and output all of the zeros as they are considered to be significant.

The type used is still a decimal. You specify precision for a decimal by giving it a precise value: 1.000m is more precise than 1.0m. This is just how a decimal works and is (briefly) mentioned here at the bottom.

ToString has no way of knowing that you don't consider the zeros significant until you tell it so. Zero is still a value.
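
What happens in the EF case can be sketched in the same Python analogy (from_db is a hypothetical name, not an EF API): the text the server sends for a decimal(10, 8) column already carries the column's scale, so the constructed value keeps eight decimal places:

```python
from decimal import Decimal

def from_db(value_text: str) -> Decimal:
    # Hypothetical sketch: a data layer hands the driver's text straight
    # to the decimal constructor, scale and all.
    return Decimal(value_text)

d = from_db("1.00000000")  # what a decimal(10, 8) column sends for 1

print(d == 1)  # True: the numeric value is 1
print(str(d))  # "1.00000000": the scale travels with the value
```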

J. Steen
  • Could you elaborate on the part `The implementation of ToString in a decimal will respect that precision`, please? I fail to find where I can specify that precision, and I fail to understand what technique EF is using to achieve it. The author specified that the type is `decimal`; is it `System.Decimal` or some other type? – Sinatr Dec 03 '14 at 10:31
  • Is that enough of a brief elaboration, or...? [This answer](http://stackoverflow.com/a/27268996/64976) offers a more in depth technical explanation of how values and precision are stored in a decimal. – J. Steen Dec 03 '14 at 10:38

If you are using database-first, Entity Framework keeps the column property in the edmx file in XML format, as follows:

<Property Name="ColumnName" Type="Decimal" Precision="8" Scale="4" Nullable="false" />

The edmx file provides information about the property, just like a DataAnnotation. In your case the column is marked with Precision="8", so when you call the ToString() method on the value, the field is formatted accordingly.

You can format the resulting string using any of the Standard Numeric Format Strings (http://msdn.microsoft.com/en-us/library/dwhawy9k(v=vs.110).aspx) or the Custom Numeric Format Strings (http://msdn.microsoft.com/en-us/library/0c899ak8(v=vs.110).aspx).

For example:

myDecimal.ToString("N0") // "1"
Teddy
  • My question was about the C# decimal's behaviour in this scenario. – Gigi Dec 03 '14 at 09:57
  • the edmx file provides information about the Property just like DataAnotation. In your case the Column is marked with the Precision="8". So When you call ToString() method in the column, the field is formatted accordingly. – Teddy Dec 03 '14 at 10:02
  • Ah, finally something that makes sense. Can you edit that into your answer? I can't change my vote otherwise. – Gigi Dec 03 '14 at 10:04
  • I don't get how this answers the question. How can the edmx XML affect `decimal.ToString()`? Is `myDecimal` some other type than `decimal`? – Sinatr Dec 03 '14 at 10:20

You are right: decimal(10,8) tells the DB to send back 8 decimal places.

If you assign the result to a numeric type, it is just that, a number, and without padding it yourself it will be shown in its minimal representation.

When you do ToString on the database object it is outputting exactly what was sent back from the SQL server, which in this case is a 1 with 8 zeros.

You can confirm this by querying the db server directly.

RobC
  • But the C# decimal variable is a decimal like any other. What makes it format itself this way in this case? – Gigi Dec 03 '14 at 09:47
  • If it came from the DB it is a DB object, not a decimal. When you print out the value you get a conversion to an actual decimal number. The ToString puts exactly what came from the DB. Try with sqlcmd on the command line to see how it comes back from the DB. – RobC Dec 03 '14 at 10:12
  • I know how it came back from the DB. The point is that my variable is a plain C# decimal, so however it came back from the DB doesn't explain this behaviour. – Gigi Dec 03 '14 at 10:13