So I know computers aren't especially 'good' at handling floats, and I've seen 0.1 + 0.2 fail to come out as exactly 0.3 in quite a few different languages. I've also learned that compilers can optimize a bit by evaluating some expressions at compile time. Knowing all that, I ran a little experiment.
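For example, this is the kind of failure I mean (a minimal standalone check, assuming the usual IEEE 754 doubles, where the comparison comes out false):

#include <stdio.h>

int main() {
    /* 0.1 + 0.2 does not compare equal to 0.3 in double precision */
    if (0.1 + 0.2 == 0.3)
        printf("equal\n");
    else
        printf("not equal\n");
    return 0;
}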
Here's some C code I wrote:
#include <stdio.h>

float g1 = 0.1;
float g2 = 0.2;

int main() {
    float l1 = 0.1;
    float l2 = 0.2;

    printf("a: %.50f\n", 0.3);
    printf("b: %.50f\n", 0.1 + 0.2);
    printf("c: %.50f\n", (0.1 * 10 + 0.2 * 10) / 10);
    printf("d: %.50f\n", l1 + l2);
    printf("e: %.50f\n", (l1 * 10 + l2 * 10) / 10);
    printf("f: %.50f\n", g1 + g2);
    printf("g: %.50f\n", (g1 * 10 + g2 * 10) / 10);
    return 0;
}
Here's its output:
a: 0.29999999999999998889776975374843459576368331909180
b: 0.30000000000000004440892098500626161694526672363281
c: 0.29999999999999998889776975374843459576368331909180
d: 0.30000001192092895507812500000000000000000000000000
e: 0.30000001192092895507812500000000000000000000000000
f: 0.30000001192092895507812500000000000000000000000000
g: 0.30000001192092895507812500000000000000000000000000
It makes complete sense to me that "d," "e," "f" and "g" have the same result. I think "a," "b" and "c" differ from "d," "e," "f" and "g" because of the difference between compile-time and run-time evaluation. However, I find it strange that "a" and "c" are the same but "b" is different.
Are my current understandings correct? Why are "a" and "c" the same while "b" is different?
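If it helps, I can also dump the same three expressions in hex-float form to compare their exact representations (just an extra check on my side, using the standard %a conversion; I haven't included its output above):

#include <stdio.h>

int main() {
    /* %a prints the exact binary value in hexadecimal floating-point notation */
    printf("a: %a\n", 0.3);
    printf("b: %a\n", 0.1 + 0.2);
    printf("c: %a\n", (0.1 * 10 + 0.2 * 10) / 10);
    return 0;
}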