
I have some doubts about what "precision" actually means in C# when working with floating-point numbers. I apologize in advance if my logic is weak and for the long explanation.

I know a float number (e.g. 10.01F) has a precision of 6 to 9 digits. So, let's say we have the following code:

float myFloat = 1.000001F;
Console.WriteLine(myFloat);

I get the exact number in the console. Now, let's use the following code:

myFloat = 1.00000006F;
Console.WriteLine(myFloat);

A different number is printed: 1.0000001, even though the number has 9 digits, which is the limit.

This is my first doubt. Does precision depend on the number itself or on the computer's architecture?

Furthermore, data is stored as bits in the computer. Bearing that in mind, I remember that converting the decimal part of a number to bits can lead to a different number when transforming it back to decimal. For example:

(Decimal) 1.0001 -> (Binary) 1.00000000000001101001
(Binary) 1.00000000000001101001 -> (Decimal) 1.00010013580322265625 (It's not the same)
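
In C# I can see a similar effect with something like this (just a sketch; a real float keeps 24 significand bits, so the stored value is not exactly the 20-bit example above, and I widen to double with "G17" only to display more of the stored digits):

float myFloat = 1.0001F;
Console.WriteLine(myFloat);                           // 1.0001 (default display)
Console.WriteLine(((double)myFloat).ToString("G17")); // 1.0001000165939331, not exactly 1.0001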

My logic after this is: maybe a float number doesn't lose information when stored; maybe that information is lost only when the number is converted back to decimal to show it to the user.

E.g.

float myFloat = 999999.11F + 1.11F;

The result of the above should be 1000000.22. However, since this number exceeds the precision of a float, I expect to see a different number, which indeed happens: 1000000.25.

There is a 0.03 difference. In order to check whether the actual stored result is 1000000.22, I wrote the following condition:

if (myFloat == 1000000.22F) {
       Console.WriteLine("Real result = 1000000.22");
}

And it actually prints it: Real result = 1000000.22.

So... does the information loss occur when converting the bits back to decimal? Or does it also happen at the lower levels of computing, and my example was just a coincidence?

DamianGDO
    [It is non-sensical and wrong to say the C# `float` type has a precision of 6-9 digits.](https://stackoverflow.com/questions/61609276/how-to-calculate-float-type-precision-and-does-it-make-sense/61614323#61614323) Forget you ever heard or saw that. The C# `float` and `double` types effectively represent numbers as integers scaled by powers of two. All of the numbers in your examples are converted to multiples of powers of two. – Eric Postpischil Jun 17 '21 at 20:30
    See https://en.wikipedia.org/wiki/IEEE_754 – Matt Johnson-Pint Jun 17 '21 at 20:57

2 Answers


1.000001F in source code is converted to the float value 8,388,616 • 2^−23, which is 1.00000095367431640625.

1.00000006F in source code is converted to the float value 8,388,609 • 2^−23, which is 1.00000011920928955078125.
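
You can decompose a float into that scaled integer yourself (a rough sketch: BitConverter.SingleToInt32Bits requires .NET Core 2.0 or later, and this handles only normal, finite values):

using System;

Show(1.000001F);     // 1.000001 = 8388616 * 2^-23
Show(1.00000006F);   // 1.0000001 = 8388609 * 2^-23

// Decompose a normal, finite float into significand * 2^exponent.
static void Show(float f)
{
    int bits = BitConverter.SingleToInt32Bits(f);
    int exponentField = (bits >> 23) & 0xFF;   // biased exponent field
    int fraction = bits & 0x7FFFFF;            // the 23 stored fraction bits
    int significand = fraction | (1 << 23);    // add the implicit leading 1 bit
    int exponent = exponentField - 127 - 23;   // remove the bias, then account for the 23 fraction bits
    Console.WriteLine($"{f} = {significand} * 2^{exponent}");
}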

Console.WriteLine shows only some of the value of these; it rounds its display to a limited number of digits, by default.
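
Asking for more digits makes this visible (a small sketch; converting a float to double is exact, and the double's "G17" format shows enough digits to identify the stored value):

float a = 1.000001F;
float b = 1.00000006F;
Console.WriteLine(a);                             // 1.000001   (rounded display)
Console.WriteLine(((double)a).ToString("G17"));   // 1.0000009536743164
Console.WriteLine(b);                             // 1.0000001  (rounded display)
Console.WriteLine(((double)b).ToString("G17"));   // 1.0000001192092896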

999999.11F is converted to 15,999,986 • 2^−4, which is 999,999.125. 1.11F is converted to 9,311,355 • 2^−23, which is 1.11000001430511474609375. When these are added using real-number mathematics, the result is 8,388,609,971,323 • 2^−23. That cannot be represented in a float, because the fraction portion of a float (called the significand) can only have 24 bits, so its maximum value as an integer is 16,777,215. If we divide that significand by 2^19 to reduce it to that limit, we get approximately 8,388,609,971,323/2^19 • 2^−23 • 2^19 = 16,000,003.76 • 2^−4. Rounding that significand to an integer produces 16,000,004 • 2^−4. So, when those two numbers are added, float arithmetic rounds the result and produces 16,000,004 • 2^−4, which is 1,000,000.25.
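
That result can be reproduced directly (a sketch; the extra digits are shown the same way as above, by widening to double):

float sum = 999999.11F + 1.11F;
Console.WriteLine(sum);                            // 1000000.25
Console.WriteLine(((double)sum).ToString("G17"));  // 1000000.25 - the rounding already happened in the literals and the addition, not in the printing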

So... does the information loss occur when converting the bits back to decimal? Or does it also happen at the lower levels of computing, and my example was just a coincidence?

Converting a decimal numeral to floating-point generally introduces a rounding error.

Adding floating-point numbers generally introduces a rounding error.

Converting a floating-point number to a decimal numeral with limited precision generally introduces a rounding error.

Eric Postpischil

The rounding occurs both when you write 1000000.22F in your code (the compiler must find the exponent and mantissa that give a result closest to the decimal number you typed), and again when converting to decimal to display.

There isn't any decimal/binary type of rounding in the actual arithmetic operations, although arithmetic operations do have rounding error related to the limited number of mantissa bits.
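
That is also why the equality test in the question succeeds (a sketch; BitConverter.SingleToInt32Bits needs .NET Core 2.0 or later): the literal 1000000.22F and the computed sum both round to the same stored value, 1000000.25.

float computed = 999999.11F + 1.11F;   // the addition rounds to 1000000.25
float literal = 1000000.22F;           // the compiler also rounds this literal to 1000000.25
Console.WriteLine(computed == literal);                  // True
Console.WriteLine(BitConverter.SingleToInt32Bits(computed) ==
                  BitConverter.SingleToInt32Bits(literal)); // True - identical bit patterns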

Ben Voigt