Here 'a' should be printed, since 0.7 < 0.7 is false, but 'c' is printed instead.
#include <stdio.h>

void main()
{
    float a = 0.7;

    if (a < 0.7)
        printf("c");
    else
        printf("a");
}
You seem to misunderstand floating point numbers. See this question.
One thing you can do is think, "Well, it will never be exactly 0.7, so maybe I can't judge for sure, but I can get close..." Then you pick a granularity, say one millionth, and compare the integer-rounded result of 1e6 * a to the integer-rounded result of 1e6 * 0.7. That asks not so much "is a < 0.7?" as "is a reasonably, close-enough, less than 0.7?"
Or, just compare against the same type. As said in the comments, 0.7 is not a float literal but a double literal: in the comparison a is promoted to double, and the value that was stored in the float a is slightly less than the double 0.7. Make the constant a float literal and 'a' is printed.
#include <stdio.h>

int main(void)
{
    float a = 0.7;

    if (a < 0.7f)      /* both sides are float now, so they compare equal */
        printf("c");
    else
        printf("a");   /* this branch is taken */

    return 0;
}
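To see why the original comparison behaved that way, you can print both values with extra digits (a small diagnostic sketch; the digits shown in the comments are what a typical IEEE-754 setup gives):

#include <stdio.h>

int main(void)
{
    float a = 0.7;

    /* The float holds the nearest single-precision value to 0.7,
       which is slightly below the nearest double-precision value. */
    printf("a   = %.17f\n", a);    /* ~0.69999998807907104 */
    printf("0.7 = %.17f\n", 0.7);  /* ~0.69999999999999996 */

    return 0;
}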