This is a question related to both Entity Framework and SQL Server (2016).
I've noticed some strange behaviour in SQL Server when summing a decimal column with a lower scale & precision and unioning that with the sum of another decimal column with a higher scale & precision. The resulting values all have the lower scale & precision.
This doesn't seem correct, as this page suggests that when UNIONing, the precision will be adjusted to be the widest necessary to accommodate both operands.
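If I'm reading that page correctly, the result of a UNION over two decimals should have scale = max(s1, s2) and precision = max(s1, s2) + max(p1 - s1, p2 - s2), which for decimal(13,2) and decimal(19,8) works out to decimal(19,8).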
The problem can be easily seen with this example:
create table low_scale (val decimal(13,2))
create table high_scale (val decimal(19,8))
insert into low_scale values (10.00), (2.23)
insert into high_scale values (0.0000002), (2.02302023)
-- Query 1: Works fine - Result is at the widest precision required to accommodate the two types:
select * from low_scale
union all
select * from high_scale
-- Query 2: Strange - result is rounded to two decimal places:
select sum(val) from low_scale
union all
select sum(val) from high_scale
As you'd expect, the result from Query 1 is:
10.00000000
2.23000000
0.00000020
2.02302023
However, this is the result from Query 2:
12.23
2.02
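My best guess is that this is because SUM over a decimal(p, s) column returns decimal(38, s), so Query 2 is actually unioning decimal(38, 2) with decimal(38, 8). By the rule above that would need precision 8 + 36 = 44, which is over the maximum of 38, so it looks like SQL Server keeps the 36 integer digits and cuts the scale back to 2 - even though none of the actual values come anywhere near needing 36 integer digits.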
It appears I can work around this by first casting the lower-precision column to a higher precision, like so:
-- Result is at the expected 8 decimal places:
select sum(CAST(val as decimal(19,8))) from low_scale
union all
select sum(val) from high_scale
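That gives 12.23000000 and 2.02302023, as expected.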
However, I'm using Entity Framework 6 and don't have much control over the SQL it generates. Is there any way to force Entity Framework to cast to a higher precision when doing the SUM, or some other way to ensure the correct behaviour?
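The only workaround I can see from the application side is dropping down to a raw query, roughly along these lines (just a sketch - MyContext is a placeholder name for my EF6 DbContext, and the SQL is the cast version from above):
using (var context = new MyContext())
{
    // Database.SqlQuery runs the hand-written SQL with the explicit cast,
    // but it bypasses LINQ entirely, which is what I'd prefer to avoid.
    // (ToList needs a using System.Linq; directive.)
    List<decimal> sums = context.Database.SqlQuery<decimal>(
        @"select sum(CAST(val as decimal(19,8))) from low_scale
          union all
          select sum(val) from high_scale").ToList();
}
I'd much rather stay in LINQ if that's at all possible, though.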
Edit: I'm not sure why this has been flagged as a duplicate of a question about floating point. This has nothing to do with floating point - it is using decimal, a fixed-point data type.