I have some doubts about what "precision" actually means in C# when working with floating-point numbers. I apologize in advance if my logic is weak and for the long explanation.
I know a float (e.g. 10.01F) has a precision of 6 to 9 digits. So, let's say we have the following code:
float myFloat = 1.000001F;
Console.WriteLine(myFloat);
I get the exact same number in the console. Now, let's use the following code:
myFloat = 1.00000006F;
Console.WriteLine(myFloat);
A different number is printed: 1.0000001, even though the number has 9 digits, which is the limit.
This is my first doubt. Does precision depend on the number itself or on the computer's architecture?
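To dig into that, I think the following should show what the float actually stores (I'm assuming .NET Core 3.0 or later, where Console.WriteLine(float) prints the shortest string that round-trips, and I'm using BitConverter.SingleToInt32Bits to get at the raw bits):
float myFloat = 1.00000006F;
// Shortest string that round-trips back to the same float.
Console.WriteLine(myFloat);                                 // 1.0000001
// A float converts to double exactly, so widening reveals more digits
// of the value that was actually stored.
Console.WriteLine(((double)myFloat).ToString("G17"));       // ~1.0000001192092896
// Raw 32 bits: 1 sign bit, 8 exponent bits, 23 mantissa bits.
int bits = BitConverter.SingleToInt32Bits(myFloat);
Console.WriteLine(Convert.ToString(bits, 2).PadLeft(32, '0'));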
Furthermore, data is stored as bits in the computer. Bearing that in mind, I remember that converting the decimal part of a number to bits can lead to a different number when transforming it back to decimal. For example:
(Decimal) 1.0001 -> (Binary) 1.00000000000001101001
(Binary) 1.00000000000001101001 -> (Decimal) 1.00010013580322265625 (It's not the same)
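My hand-worked example above only keeps 20 fraction bits, while a real float keeps 23, so the stored value is slightly different, but I believe the same effect can be reproduced in C# by widening the float to a double (which is an exact conversion):
float f = 1.0001F;
// 1.0001 has no exact binary representation, so the float holds the
// nearest value it can represent instead.
Console.WriteLine(((double)f).ToString("G17"));   // ~1.0001000165939331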
My reasoning after this is: maybe a float doesn't lose information when it is stored; maybe that information is lost when the number is converted back to decimal to be shown to the user.
E.g.
float myFloat = 999999.11F + 1.11F;
The result of the above should be 1000000.22. However, since this number exceeds the precision of a float, I should see a different number, which indeed happens: 1000000.25
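If my understanding is right, the rounding already happens when each value is stored: near 1,000,000 a float can only step in increments of 0.0625, so neither operand nor the ideal sum is exactly representable. Widening to double seems to confirm this:
// What the operands and the sum actually hold (double widening is exact).
Console.WriteLine(((double)999999.11F).ToString("G17"));            // 999999.125
Console.WriteLine(((double)1.11F).ToString("G17"));                 // ~1.1100000143051147
Console.WriteLine(((double)(999999.11F + 1.11F)).ToString("G17"));  // 1000000.25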
There is a 0.03 difference. To see if the actual result is 1000000.22, I wrote the following condition:
if (myFloat == 1000000.22F) {
    Console.WriteLine("Real result = 1000000.22");
}
And it actually prints it: Real result = 1000000.22.
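To rule out the comparison somehow working on the decimal strings, I also compared the raw bit patterns (assuming BitConverter.SingleToInt32Bits is the right tool for that); it looks like the literal 1000000.22F itself gets rounded to the same nearest float as the sum:
// The right-hand literal, widened exactly to double.
Console.WriteLine(((double)1000000.22F).ToString("G17"));   // 1000000.25
// Do the two floats have identical bit patterns?
Console.WriteLine(BitConverter.SingleToInt32Bits(myFloat) ==
                  BitConverter.SingleToInt32Bits(1000000.22F));   // True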
So... does the information loss occur when converting the bits back to decimal, or does it also happen at the lower levels of computing and my example was just a coincidence?