Floating-point division in C is confusing me. I know the usual idiom is to cast one of the operands, e.g. (float)a / b, as in the small sketch below. However, I am curious about the actual cause of the strange behavior of the program that follows the sketch.
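Just to show what I mean by the cast (a minimal sketch, separate from the program in question):

#include <stdio.h>
int main(void)
{
    int a = 30, b = 16;
    /* casting one operand forces floating-point division,
       so a genuine double is passed to %f */
    printf("cast : %f \n", (double)a / b);   /* prints 1.875000 */
    return 0;
}

Here is the program whose output I don't understand: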
#include <stdio.h>
int main(void)
{
    int a = 30, b = 16;
    double divresult;
    divresult = a/b;
    printf("result1 : %f \n", a/b);
    printf("result1 : %f \n", a/b);
    printf("result1 : %f \n", a/b);
    printf("result1 : %f \n", a/b);
    printf("result1 : %f \n", a/b);
    printf("result1 : %f \n", a/b);
    printf("result1 : %f \n", a/b);
    printf("result1 : %f \n", a/b);
    printf("result1 : %f \n", a/b);
    printf("divresult : %f \n", divresult);
    printf("result2 : %f \n", a/b);
    printf("result2 : %f \n", a/b);
    printf("result2 : %f \n", a/b);
    printf("result2 : %f \n", a/b);
    printf("result2 : %f \n", a/b);
    printf("result2 : %f \n", a/b);
    printf("result2 : %f \n", a/b);
    printf("result2 : %f \n", a/b);
    printf("result2 : %f \n", a/b);
    printf("result2 : %f \n", a/b);
    printf("result2 : %f \n", a/b);
    printf("result2 : %f \n", a/b);
    printf("result2 : %f \n", a/b);
    return 0;
}
Output:
result1 : 0.000000
result1 : 0.000000
result1 : 0.000000
result1 : 0.000000
result1 : 0.000000
result1 : 0.000000
result1 : 0.000000
result1 : 0.000000
result1 : 0.000000
divresult : 1.000000
result2 : 1.000000
result2 : 1.000000
result2 : 1.000000
result2 : 1.000000
result2 : 1.000000
result2 : 1.000000
result2 : 1.000000
result2 : 1.000000
result2 : 1.000000
result2 : 1.000000
result2 : 1.000000
result2 : 1.000000
result2 : 1.000000
Why does this happen? I know that, in general, computers cannot represent every floating-point number exactly, but that alone does not explain this output: there is a clear pattern. Why does the value printed for a/b change from 0.000000 to 1.000000 right after the printf of divresult?
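For what it's worth, if I match the conversion specifier to the argument type, the output becomes stable. This is a minimal sketch of what I tried, assuming the oddity is related to passing an int where %f expects a double:

#include <stdio.h>
int main(void)
{
    int a = 30, b = 16;
    double divresult = (double)a / b;

    /* specifiers now match the argument types */
    printf("int division    : %d \n", a / b);           /* 1 */
    printf("double division : %f \n", (double)a / b);   /* 1.875000 */
    printf("divresult       : %f \n", divresult);       /* 1.875000 */
    return 0;
}

Is that mismatch really what explains the 0.000000 / 1.000000 pattern above?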