I was recently setting up some data for a test case that checks rounding errors on the float data type, and I ran into some unexpected results. I expected cases t2 and t3 to produce the same result as t1, but that is not what happens on my machine. Can anyone tell me why?
I suspect the difference comes from t2 and t3 being evaluated at compile time, but I'm surprised that the compiler completely ignores my attempts to force an intermediate float data type during that evaluation. Is there some part of the C# specification that mandates evaluating constant expressions with the largest available floating-point type, regardless of the type specified?
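If compile-time evaluation really is the cause, I would expect that making the expression non-constant, for example by routing it through a method call, would bring back the t1-style value. A minimal sketch of what I mean (Identity is just a hypothetical helper, not part of my original test):

static float Identity(float f) { return f; }   // only here so the expression is no longer a constant expression

double t4 = (double)Identity(1/(3.0f));
System.Console.WriteLine( t4 );                // I'd expect this to print the same value as t1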
This is on a 64-bit Intel machine running Windows 7 and .NET 4.5.2.
float temp_t1 = 1/(3.0f);
double t1 = (double)temp_t1;
const float temp_t2 = 1/(3.0f);
double t2 = (double)temp_t2;
double t3 = (double)(float)(1/(3.0f));
System.Console.WriteLine( t1 ); //prints 0.333333343267441
System.Console.WriteLine( t2 ); //prints 0.333333333333333
System.Console.WriteLine( t3 ); //prints 0.333333333333333
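For completeness: the default double formatting rounds to 15 significant digits, so printing with the round-trip format specifier should show exactly which doubles the three variables ended up holding (just a sketch, using the standard "R" format):

System.Console.WriteLine( t1.ToString("R") );
System.Console.WriteLine( t2.ToString("R") );
System.Console.WriteLine( t3.ToString("R") );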