I've encountered a floating-point precision problem in C#; here is a minimal working example:
int num = 160;
float test = 1.3f;
float result = num * test;
int result_1 = (int)result;              // cast after storing in a float variable
int result_2 = (int)(num * test);        // cast the expression directly
int result_3 = (int)(float)(num * test); // force an explicit float conversion first
Console.WriteLine("{0} {1} {2} {3}", result, result_1, result_2, result_3);
The code above outputs "208 208 207 208". Could someone explain the odd value of result_2, which I expected to be 208?
(I know binary cannot represent 1.3 exactly, which causes the precision problem, but I'm curious about the details.)
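For what it's worth, here is a small diagnostic sketch (my own addition, not part of the repro) that widens the values to double to show what is actually stored. The printed values are what I'd expect on a runtime that formats doubles with round-trip precision, e.g. .NET Core 3.0 or later; older runtimes may print fewer digits:

int num = 160;
float test = 1.3f;
// The nearest float to 1.3 is slightly *below* 1.3; widening to double
// reveals the stored value without introducing further rounding.
Console.WriteLine((double)test);                        // ~1.2999999523162842
// The mathematically exact product is therefore just under 208.
Console.WriteLine((double)num * (double)test);          // ~207.99999237060547
// Rounding that product back to float lands on 208 exactly,
// since 208 is the nearest representable float.
Console.WriteLine((float)((double)num * (double)test)); // 208

My guess is that in the result_2 case the intermediate product is kept at higher precision and truncated before it is ever rounded to a float, but I'd like a definitive explanation.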