Possible Duplicate:
Why can't decimal numbers be represented exactly in binary?
I am developing a fairly simple mathematics algorithm in C++.
I have a floating-point variable named "step", and each time the body of a while loop finishes, step should be divided by 10.
So my code looks roughly like this:
float step = 1;
while ( ... ) {
    // the loop body
    step /= 10;
}
By my (apparently naive) logic, that should work out cleanly: step would go from 1 to 0.1, then from 0.1 to 0.01, and so on.
But it doesn't. Instead I get something like 0.100000000001, which left me completely confused.
Can someone please help me with this? It's probably something about the data type itself that I don't fully understand, so a fuller explanation would be appreciated.