#include <iostream>
using namespace std;

int main() {
    float a = 6.2;
    if (a < 6.2) {
        cout << "Yes\n";
    }
    return 0;
}
This code prints "Yes". Can someone explain why?
This code is not required to behave in any particular way. Any fixed-precision representation will sometimes produce surprising results in comparisons like this.
For example, in six-digit, fixed-precision decimal, 1/3 + 1/3 + 1/3 will not be 1 because 1/3 can only be represented as "0.333333" and adding that three times will give "0.999999". Similarly, 1/3 + 1/3 will not be 2/3 because 2/3 is "0.666667".
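The same thing happens in binary floating point. As a minimal sketch (assuming an ordinary IEEE-754 double, which is what virtually every desktop platform uses), summing 0.1 ten times does not give exactly 1, because 0.1 has no exact binary representation:

#include <iostream>
#include <iomanip>

int main() {
    double sum = 0.0;
    for (int i = 0; i < 10; ++i) {
        sum += 0.1;                                        // 0.1 is not exactly representable in binary
    }
    std::cout << std::setprecision(17) << sum << '\n';     // typically prints 0.99999999999999989
    std::cout << std::boolalpha << (sum == 1.0) << '\n';   // typically prints false
    return 0;
}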
Your system can't represent 6.2 exactly in binary, just as the decimal system above can't represent 1/3 exactly, so you get this kind of weirdness.
Here, the float representation of 62/10 and the double representation of 62/10 aren't equal. In the comparison a < 6.2, the literal 6.2 has type double, so a is converted to double and the two approximations are compared directly. The double representation is almost certainly closer to the correct value than the float one, since it carries more precision. The same would be true of 1/3 in decimal: a low-precision format might use "0.333333" and a high-precision one "0.333333333333". Those aren't equal, and notice that one of them is smaller than the other.
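You can see the two approximations directly by printing them with more digits than the default. This is just an illustrative sketch; the exact digits depend on your implementation, but on a typical IEEE-754 system the output looks roughly like the comments below:

#include <iostream>
#include <iomanip>

int main() {
    float  f = 6.2f;   // nearest float  to 6.2
    double d = 6.2;    // nearest double to 6.2
    std::cout << std::setprecision(20)
              << "float:  " << f << '\n'   // about 6.1999998092651367188
              << "double: " << d << '\n';  // about 6.2000000000000001776
    return 0;
}

On such a system the float approximation happens to fall slightly below 6.2 and the double approximation slightly above it, which is exactly why a < 6.2 comes out true.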
There is no rule that two different approximations of 6.2 must be equal. Why would they have to be?
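If you want to see that the type of the literal is what matters, compare a against a float literal as well. This is a sketch of what a typical IEEE-754 implementation does; the language standard doesn't guarantee either outcome:

#include <iostream>

int main() {
    float a = 6.2f;
    std::cout << std::boolalpha
              << (a < 6.2)  << '\n'   // a promoted to double, compared with the double 6.2: true here
              << (a < 6.2f) << '\n';  // compared with the same float approximation: false
    return 0;
}

With the float literal, both sides are the same approximation, so the comparison is false. With the double literal, you're comparing two different approximations of 6.2, and nothing forces them to compare any particular way.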