Debug.Log((int)(4.2f * 10.0f));
Result : 41 (in Unity 2017.2.1p4)
Result : 42 (in Unity 5.4.3p4)
Why does it differ?
Floating-point numbers use an approximate representation, and floating-point math is not guaranteed to be deterministic: the same calculation can give different results on different computers, architectures, or with different compiler and optimization settings.
It is therefore likely that the compiler or optimization settings differ between Unity 5.4.3p4 and Unity 2017.2.1p4.
In your example, 4.2f * 10.0f can produce slightly different intermediate values such as 41.9999998f or 42.0000002f, which the cast to int truncates to 41 or 42 respectively.
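For instance, here is a minimal sketch (plain C#, not tied to any particular Unity version) that pins the intermediate precision with explicit casts; the values in the comments are the exactly representable values involved:
double product = (double)4.2f * 10.0; // 4.2f is exactly 4.19999980926513671875, so this is exactly 41.9999980926513671875
float asFloat = (float)product;       // rounds to the nearest float, which is 42.0f
Debug.Log((int)product);              // 41 — the int cast truncates toward zero
Debug.Log((int)asFloat);              // 42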
For more information, see these posts:
The exact result of the product of the (single-precision) float 4.2f and 10.0f is 41.9999980926513671875. The nearest float to this exact result is 42.0f.
But this exact result fits in double precision, so if the expression is evaluated in single precision the product rounds to 42.0f and it will print 42, while if it is evaluated in double precision the value stays just below 42 and it will print 41.
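If you want to see those exact values yourself, something like this should work (a sketch; "G17" is the round-trip format for double, so it prints the digits the runtime actually stores):
Debug.Log(((double)4.2f).ToString("G17"));        // 4.1999998092651367
Debug.Log(((double)4.2f * 10.0).ToString("G17")); // 41.999998092651367
Debug.Log((float)((double)4.2f * 10.0));          // 42 once rounded back to single precision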
In other words, the expression can be decomposed into:
x = 4.2f;
x = x * 10.0f;        // exact product: 41.9999980926513671875
Debug.Log((int)(x));  // 42 if x was rounded to a float, 41 if it was kept in double precision
If the compiler decides to keep x in single precision, it will print 42; if it decides to evaluate x in double precision, it will print 41.
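If you want the result not to depend on how the compiler or JIT evaluates the intermediate, one option is to force the rounding yourself with an explicit cast (per the C# rules an explicit conversion to float is supposed to discard any extra precision); a sketch:
float x = 4.2f;
x = (float)(x * 10.0f); // force the product to be rounded to single precision: 42.0f
Debug.Log((int)x);      // 42 either way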