Code:
const float A = 0.1f;
float a = A;
Console.WriteLine(a == A);
Console.WriteLine(a * 10 == A * 10);
Output:
True
False
Code:
const double A = 0.1d;
double a = A;
Console.WriteLine(a == A);
Console.WriteLine(a * 10 == A * 10);
Output:
True
True
My environment is a .NET Framework 4.8 32-bit console application. I am wondering why a * 10 == A * 10 is false when the variable type is float. I don't think this is a floating-point arithmetic accuracy problem, because I assigned the same value (0.1f) to both sides. Could you explain?
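For reference, here is a rough diagnostic variant I could also run (just a sketch of a probe I am assuming would help; the class name FloatCompareProbe and the explicit cast are my own additions, not part of the original program). It forces the product back to float precision with an explicit cast before comparing, and prints both sides with the round-trip ("R") format so the stored values are visible:

Code:
using System;

class FloatCompareProbe
{
    const float A = 0.1f;

    static void Main()
    {
        float a = A;

        // An explicit (float) cast truncates any extra precision the runtime
        // may have kept in the intermediate result, so both sides of the
        // comparison are rounded to float before they are compared.
        float product = (float)(a * 10);
        Console.WriteLine(product == A * 10);

        // "R" (round-trip) format prints each value as it is actually stored,
        // which should make any difference between the two sides visible.
        Console.WriteLine((a * 10).ToString("R"));
        Console.WriteLine((A * 10).ToString("R"));
    }
}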