
EF6 appears to be inconsistent in how it handles rounding when multiplying and adding integer columns with decimal values.

// CREATE TABLE MyTable (MyIntValue INT NOT NULL)
// INSERT INTO MyTable (MyIntValue) VALUES (10)
const int IntScale = 5;
const decimal DecimalScale = 5;
const decimal DecimalScale2 = 5.0m;
context.Set<MyTable>()
    .Select(row => new
    {
         WithFloats = 0.5f + (row.MyIntValue * 5.0f),                         // 50.5
         WithDecimals = 0.5m + (row.MyIntValue * 5.0m),                       // 51 
         WithDecimals2 = 0.5m + ((decimal)row.MyIntValue * 5.0m),             // 50.5            
         WithDecimals3 = 0.5m + ((decimal)row.MyIntValue * IntScale),         // 51
         WithDecimals4 = 0.5m + ((decimal)row.MyIntValue * (decimal)IntScale), // 51
         WithDecimals5 = 0.5m + ((decimal)row.MyIntValue * DecimalScale),      // 51
         WithDecimals6 = 0.5m + ((decimal)row.MyIntValue * DecimalScale2)     // 50.5
    })
    .Single();

Surely this isn't the expected/correct behaviour? I would expect the WithDecimals value to be 50.5 (not 51). Am I overlooking something simple? How can I ensure that WithDecimals doesn't get rounded, without changing the type of the other constants?

The generated SQL for WithFloats and WithDecimals (respectively):

,CAST(0.5 AS REAL) + (CAST(MyIntValue AS REAL) * CAST(5 AS REAL)) AS WithFloats
,0.5 + (CAST(MyIntValue AS DECIMAL(19,0)) * CAST(5 AS DECIMAL(18))) AS WithDecimals
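For comparison, here is a quick in-memory check (plain C#, no EF; the value 10 mirrors the inserted row) suggesting the rounding comes from the SQL that EF6 generates, not from .NET decimal arithmetic itself:

using System;

class DecimalCheck
{
    static void Main()
    {
        // Same arithmetic evaluated client-side: int * decimal promotes to decimal,
        // so the fractional part is preserved.
        int myIntValue = 10;
        decimal withDecimals = 0.5m + (myIntValue * 5.0m);
        Console.WriteLine(withDecimals); // prints 50.5
    }
}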
  • EF maps System.Decimal to DECIMAL(18, 0) by default, so your WithDecimals value gets rounded to the nearest integer. I believe there's a way to change that default behaviour, see e.g. this comment: http://stackoverflow.com/a/27418286/189572 (a sketch of that configuration appears after these comments) – Max Dec 23 '15 at 02:32
  • @Max: Adding a default decimal precision appears to have no effect. Furthermore, if you were to add another projected column with the expression: (0.5m + row.MyIntValue) the result would be 10.5, so the behaviour wouldn't appear to be caused by a default decimal precision. – mindlessgoods Dec 23 '15 at 02:47
  • Interesting. Looks like the cast of row.MyIntValue to DECIMAL(19,0) is the problem, it works with a cast to DECIMAL(18). What happens when you cast it to decimal in C#? ```WithDecimals = 0.5m + ((decimal)row.MyIntValue * 5.0m)``` – Max Dec 23 '15 at 03:00
  • Also fixes itself if you use `CAST(0.5 AS REAL)`. Though I fail to see how it can be a precision issue (`decimal(18)` should be plenty) to essentially evaluate `0.5 + 50`. `CAST(10 AS DECIMAL(18,1)` also works. – Rob Dec 23 '15 at 03:54
  • @Max: Casting the int column to decimal appears to result in the correct output, though this only works for columns and not constants. I've updated the original question to reflect this. This still seems very inconsistent to me. – mindlessgoods Dec 23 '15 at 18:35
  • @Rob I believe decimal(18) is equivalent to decimal(18,0), i.e. it has a precision of 18 but a scale (the number of digits to the right of the decimal point) of 0. That's why 0.5 + 50 gets rounded to 51. – Max Dec 23 '15 at 18:59
  • @Max You're right, though now I'm curious as to why a `decimal` defaults to essentially behaving as an `int` – Rob Dec 23 '15 at 22:15
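For reference, a minimal sketch of the default-precision configuration Max links to above, assuming an EF6 DbContext (the context name and the chosen precision/scale are illustrative, not from the question). As noted in the comments, the asker reports it did not change the result here, presumably because the CASTs are applied to the literal and the int column rather than to a mapped decimal property:

using System.Data.Entity;

// Hedged sketch: change EF6's default mapping for System.Decimal properties via a model convention.
public class MyContext : DbContext
{
    public DbSet<MyTable> MyTables { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Map all decimal properties to DECIMAL(18,2) instead of the default DECIMAL(18,0)-style scale.
        modelBuilder.Properties<decimal>()
                    .Configure(c => c.HasPrecision(18, 2));
    }
}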
