I know that a floating-point variable stores its value in a sign-exponent-fraction format (as specified by IEEE 754), that most decimal fractions can't be represented exactly, and that I should probably never compare two floats for exact equality without some tolerance.
But why exactly does 0.09f - 0.01f give the value 0.0800000057f? What exactly happens under the hood of the .NET VM and in memory when I do that subtraction?
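
For reference, here is a minimal sketch of what I mean (the BitConverter calls are just one way to dump the raw bits; the hex values in the comments are what I'd expect the stored sign/exponent/fraction fields to be):

```csharp
using System;

class FloatSubtractionDemo
{
    static void Main()
    {
        float a = 0.09f;      // nearest float to 0.09 (slightly above 0.09)
        float b = 0.01f;      // nearest float to 0.01 (slightly below 0.01)
        float result = a - b; // exact difference of the two stored values, rounded to the nearest float

        // Prints 0.0800000057 (nine significant digits), not 0.08
        Console.WriteLine(result.ToString("G9"));

        // Raw bit patterns: sign (1 bit) | exponent (8 bits) | fraction (23 bits)
        Console.WriteLine(BitConverter.ToInt32(BitConverter.GetBytes(a), 0).ToString("X8"));      // 3DB851EC
        Console.WriteLine(BitConverter.ToInt32(BitConverter.GetBytes(b), 0).ToString("X8"));      // 3C23D70A
        Console.WriteLine(BitConverter.ToInt32(BitConverter.GetBytes(result), 0).ToString("X8")); // 3DA3D70B
    }
}
```

(My guess is that both stored operands are already rounded approximations of 0.09 and 0.01, but I'd like to understand the exact mechanics of where the extra 0.0000000057 comes from.)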