I recently came across a weird case in an ETL process where the results seem unpredictable to me. I have read "Difference between numeric, float and decimal in SQL Server", but I don't think this is an overflow or decimal-precision issue.
Scenario:
Source table "test" in SQL Server 2008 SP3, column a declared as numeric (38,6).
The result is cast first to real, and then to int. The issue doesn't occur if there is a direct cast from numeric to int.
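For reference, the direct cast that doesn't show the issue looks like this (a minimal sketch against the same table; the alias int_direct is just for illustration):

-- Direct numeric(38,6) -> int cast, no intermediate real
SELECT a, CAST(a AS int) AS int_direct FROM test;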
Results of:
SELECT a, CAST(a AS real) AS real_a, CAST(CAST(a AS real) AS int) AS int_a FROM test;
- a: 778881838.810000
- real_a: 7.78819E+08
- int_a: 778881856
The same experiment, run in SQL Server 2017 (SQL Fiddle: http://sqlfiddle.com/#!18/45aca/2), gives this:
- a: 778881838.81
- real_a: 778881860
- int_a: 778881856
I can (vaguely) understand the 7.78819E+08 display, but why does the two-step conversion (numeric -> real -> int) end up +18 away from the original value? The number seems completely arbitrary to me.
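In case it helps, here is a self-contained repro without the table (assuming the literal is typed the same way as the column); the last column widens the real back to float, which I believe shows the stored single-precision value exactly:

-- Repro with a literal cast to numeric(38,6), then the two-step cast
SELECT
    CAST(CAST(CAST(778881838.810000 AS numeric(38,6)) AS real) AS int)   AS int_a,
    -- widening real to float is lossless, so this should expose what real actually stores
    CAST(CAST(CAST(778881838.810000 AS numeric(38,6)) AS real) AS float) AS real_as_float;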