Why does the number of decimal places in the output differ between these two scenarios?
33 * .1 = 3.3000000000000003
whereas
33 * .01 = 0.33
Any idea why it's like that?
NB: 33 in the above calculation can be any integer.
When you convert .1 (1/10) to base 2 (binary), you get a repeating pattern after the radix point, just like trying to represent 1/3 in base 10. The value is not exact, and therefore you can't do exact math with it using normal floating-point arithmetic.
The same is true of .01 — it can't be represented exactly in binary either. The difference is that when 33 * .01 is rounded to the nearest representable double, the result happens to be the very same double that the literal 0.33 rounds to, so it prints as 0.33. With 33 * .1, the product rounds to a double slightly above 3.3, and that tiny error is visible when the value is printed.
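A short Python sketch illustrating the point above: both 0.1 and 0.01 are already inexact before any multiplication (the `Decimal` constructor reveals the exact binary value stored), but only one of the two products lands on a double whose shortest printable form looks "clean":

```python
from decimal import Decimal

# Neither literal is stored exactly; Decimal shows the true stored value.
print(Decimal(0.1))   # a long decimal expansion slightly above 0.1
print(Decimal(0.01))  # likewise, slightly above 0.01

# 33 * 0.1 rounds to a double just above 3.3 -> the error is visible.
print(33 * 0.1)           # 3.3000000000000003
print(33 * 0.1 == 3.3)    # False

# 33 * 0.01 rounds to the same double the literal 0.33 rounds to,
# so it prints as if it were exact.
print(33 * 0.01)          # 0.33
print(33 * 0.01 == 0.33)  # True
```

Note that 0.33 here is not exact either; the two rounding errors merely coincide on the same representable double.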