I am prompting the user to input a floating-point number. I save it in a float variable and multiply it by 100 to turn it into an integer. Only 2 decimal places are allowed, so it should be a fairly easy thing (a minimal sketch of what I'm doing follows the list below). Now the strange part:
- User Input : 0.1 -> Output : 100
- User Input : 1.1 -> Output : 110
- User Input : 1.5 -> Output : 150
- User Input : 2.1 -> Output : 209.999985
- User Input : 2.5 -> Output : 250
- User Input : 3.8 -> Output : 380
- User Input : 4.2 -> Output : 419.999969
- User Input : 5.6 -> Output : 560
- User Input : 6.0 -> Output : 600
- User Input : 7.5 -> Output : 750
- User Input : 8.1 -> Output : 810.000061
- User Input : 9.9 -> Output : 989.999969
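
For reference, here is a minimal sketch of roughly what I'm doing (the exact code isn't shown above, so the variable names and I/O calls are assumptions):

```c
#include <stdio.h>

int main(void)
{
    float input;

    /* Read a number with at most 2 decimal places, e.g. 2.1 */
    if (scanf("%f", &input) != 1)
        return 1;

    /* Multiply by 100, expecting a whole number */
    float scaled = input * 100.0f;

    /* Prints 209.999985 for an input of 2.1 */
    printf("%f\n", scaled);
    return 0;
}
```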
I have only tried this up to 10.00.
Referring to Why Are Floating Point Numbers Inaccurate?, I understand the reason behind this behavior, but is there any way to know in advance which numbers will behave strangely?
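
For example, printing the value that actually gets stored with more digits makes the error visible (just an illustration of the effect, not my original code):

```c
#include <stdio.h>

int main(void)
{
    /* 1.5 has an exact binary representation; the others are stored as the
       nearest representable float, and whether value * 100 still comes out
       as a whole number depends on how that tiny error propagates through
       the rounded multiplication. */
    float values[] = { 1.1f, 1.5f, 2.1f, 4.2f, 8.1f };
    int count = sizeof values / sizeof values[0];

    for (int i = 0; i < count; ++i)
        printf("%g is stored as %.20f -> *100 = %f\n",
               values[i], values[i], values[i] * 100.0f);

    return 0;
}
```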