In Unity 3D, storing 0.45f in a variable gives a different result than writing the literal inline:
float val = 0.45f; // this is a float, NOT a double, in my test case
int result = (int)(val * 100); // output: 44
Versus:
int result = (int)(0.45f * 100); // output: 45
The first case gives 44, while the second gives 45. Is this legal in C#? I would have thought the compiler had to truncate the value to float precision because of the "f" suffix, even when the literal is written inline.
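For reference, here is a self-contained repro of what I am seeing (the class name is just for illustration; the commented outputs are what my Unity toolchain prints, and I understand another compiler or runtime could print 44 for both lines):

using System;

class Repro
{
    static void Main()
    {
        float val = 0.45f;
        Console.WriteLine((int)(val * 100));   // 44 here: the multiply runs at runtime
        Console.WriteLine((int)(0.45f * 100)); // 45 here: the compiler folds the constant expression
    }
}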
Alternative explanation:
Note that the compiler is not necessarily upgrading the precision. Instead, it could be choosing a lower value for "0.45f" in the first case and a higher value in the second. Is that legal for a C# compiler?
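One way to check which value a given toolchain actually stores for the literal would be to print its full decimal expansion, widening to double first so no further rounding happens (the output in the comment is what a standard IEEE 754 single-precision value gives; other environments could differ):

using System;

class InspectLiteral
{
    static void Main()
    {
        // Widen 0.45f to double, then print 17 significant digits,
        // enough to round-trip any double exactly.
        Console.WriteLine(((double)0.45f).ToString("G17")); // 0.44999998807907104
    }
}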
Third explanation:
The compiler could perform its floating-point calculations differently than the runtime does. So in the second case, the compiler could decide the answer is 45 based on one legal value of "0.45f", while the runtime chooses a different answer based on a different legal value of "0.45f". Is that legal?
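If it helps, this sketch is how I would try to separate the two evaluation paths: a const operand forces the compiler to fold the whole expression at compile time, while a plain local leaves the multiply to the runtime (names are illustrative; whether the two prints actually differ presumably depends on the compiler and runtime in use):

using System;

class FoldVsRuntime
{
    const float C = 0.45f; // constant expression: the compiler folds C * 100

    static void Main()
    {
        float v = 0.45f;                   // plain local: multiplied at runtime
        Console.WriteLine((int)(C * 100)); // compile-time result
        Console.WriteLine((int)(v * 100)); // runtime result; may differ from the line above
    }
}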