I found this code snippet on page 174 of A Book on C by Al Kelley and Ira Pohl.
#include <stdio.h>

int main(void)
{
    int cnt = 0;
    double sum = 0.0, x;

    /* the equality test is the intended stopping condition */
    for (x = 0.0; x != 9.9; x += 0.1)
    {
        sum = sum + x;
        printf("cnt = %5d\n", cnt++);
    }
    return 0;
}
and it became an infinite loop, as the book said it would. The book didn't give the precise reason, except to say that it has to do with the accuracy of the machine.
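Presumably "the accuracy of the machine" refers to binary floating point: 0.1 has no exact representation as a double, so every addition carries a tiny rounding error. A minimal sketch that exposes the stored values (my own illustration, not from the book, assuming IEEE 754 doubles; the exact digits are platform-dependent):

#include <stdio.h>

int main(void)
{
    /* neither 0.1 nor 9.9 is exactly representable in binary,
       so the stored values are the nearest doubles */
    printf("%.25f\n", 0.1);  /* e.g. 0.1000000000000000055511151 */
    printf("%.25f\n", 9.9);  /* also slightly off from 9.9 */
    return 0;
}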
I modified the code to check whether x == 9.9 would ever become true, i.e. whether x was actually attaining 9.9, by adding the following lines inside the loop:
diff = x - 9.9;
printf("cnt = %4d  x = %10.10f  diff = %10.10f\n", ++cnt, x, diff);
and I got the following lines among the output:
cnt =   98  x = 9.7000000000  diff = -0.2000000000
cnt =   99  x = 9.8000000000  diff = -0.1000000000
cnt =  100  x = 9.9000000000  diff = -0.0000000000
cnt =  101  x = 10.0000000000  diff = 0.1000000000
cnt =  102  x = 10.1000000000  diff = 0.2000000000
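Note that %10.10f rounds to ten decimal places, so a residual smaller than about 5e-11 would still print as -0.0000000000. A stricter check (again my own sketch, not from the book) would print both values at full double precision, where 17 significant digits are enough to distinguish any two doubles:

#include <stdio.h>

int main(void)
{
    double x = 0.0;
    int i;

    for (i = 0; i < 99; ++i)  /* 99 steps of 0.1 "should" land on 9.9 */
        x += 0.1;

    printf("x   = %.17g\n", x);
    printf("9.9 = %.17g\n", 9.9);
    printf("x == 9.9 is %s\n", x == 9.9 ? "true" : "false");
    return 0;
}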
If x is attaining the value 9.9 exactly, why is it still an infinite loop?