case 1
float a = 0.6;
if (a < 0.6)
{
printf("c");
}
else
{
printf("c#");
}
output c#
case 2
float a = 0.9;
if (a < 0.9)
{
printf("c");
}
else
{
printf("c#");
}
output c
Now the question is: why?
I'm assuming float is IEEE 754 32-bit binary, and double is IEEE 754 64-bit binary.
The closest double to 0.6, the actual value of the literal, is 0.59999999999999997779553950749686919152736663818359375. The result of converting it to float is 0.60000002384185791015625, slightly bigger.
The closest double to 0.9 is 0.90000000000000002220446049250313080847263336181640625. The result of converting it to float is 0.89999997615814208984375, slightly smaller.
In each case, a decimal fraction that cannot be represented exactly is rounded to the nearest double to represent the literal. It is then rounded to a float for assignment to the variable a, and under round-to-nearest rules the float may be slightly smaller or slightly greater than the double, or even exactly the same if the double's binary representation has enough trailing zeros.
The short answer: many decimal fractions, such as 0.6 and 0.9, cannot be represented exactly in binary floating point, even though they terminate in decimal. That means comparing a float against the "same" value stored as a double isn't guaranteed to go either way: depending on the value, the float may round above or below the double.
Change the whole thing to use only single-precision literals (the f suffix), and the comparison behaves consistently:
float a = 0.6f;
if (a < 0.6f)
{
printf("c");
}
else
{
printf("c#");
}
The error actually has nothing to do with accuracy problems, and everything to do with type promotion. It is roughly equivalent to shoving 300 into a char and then comparing the result with the real integer 300: when you first shoved it in, the value got truncated to fit in the smaller type, and during the comparison it got promoted back to the bigger type.
Edit
The accuracy problems that everyone is talking about here are a different phenomenon. You can see them manifest with a boolean expression like (0.1 * 3.0 == 0.3): naively both sides are 0.3, but 0.1 has no exact binary representation, so the product rounds to a value slightly different from the rounding of the literal 0.3, and the comparison is false. If however you wrote the expression (3.0 * 5.0 == 3.0 * 5.0), this is always guaranteed to be true on any conforming processor. (N.B. on many processors, including Intel's, you can manipulate the configuration so that they don't conform to the IEEE floating-point standard.)