14

Say I have:

float a = 3            // (gdb) p/f a   = 3
float b = 299792458    // (gdb) p/f b   = 299792448

then

float sum = a + b      // (gdb) p/f sum = 299792448

I think it has something to do with the mantissa shifting around. Can someone explain exactly what's going on? (These are 32-bit floats.)

mharris7190
  • 1,334
  • 3
  • 20
  • 36
  • You might want to try changing from float to double, if this is a limits-of-precision problem. Remember, floats round off; if you don't want that behavior, stick with ints or longs or use one of the extended-precision packages. – keshlam Mar 05 '14 at 01:21
  • So, I'm asking about the mechanics of the rounding off – mharris7190 Mar 05 '14 at 01:22
  • 1
    Imagine floats were ten-based, and mantissa is just three digits. 99900 is then 999*10^2. Now add 3: 99903. But mantissa is short -> rounding. The same for 2-based, but we now see funky effects on conversions too, because we print in decimal. – user3125367 Mar 05 '14 at 01:25
  • 1
    here you have a detailed explanation on what's going on or, in other words, how a float is stored into 32 bits. http://en.wikipedia.org/wiki/Single-precision_floating-point_format – Paolo Mar 05 '14 at 01:29
  • Related: [Why adding big to small in floating point introduce more error?](https://stackoverflow.com/q/53140098) for a similar case where rounding error is large compared to the smaller operand, but not equal to it. – Peter Cordes Nov 08 '21 at 21:20

4 Answers

12

32-bit floats only have 24 bits of precision. Thus, a float cannot hold b exactly; it does the best job it can by setting some exponent and mantissa to get as close as possible¹. (The nearest representable float to the constant in the source; the default FP rounding mode is "nearest".)

When you then consider the floating point representation of b and a, and try and add them, the addition operation will shift the small number a's mantissa downwards as it tries to match b's exponent, to the point where the value (3) falls off the end and you're left with 0. Hence, the addition operator ends up adding floating point zero to b. (This is an over-simplification; low bits can still affect rounding if there's partial overlap of mantissas.)

In general, the infinite-precision addition result has to get rounded to the nearest float with the current FP rounding mode, and that happened to be equal to b.

See also Why adding big to small in floating point introduce more error? for a case where the number does change, but with a large rounding error; it uses decimal significant figures as a way to help understand binary float rounding.


Footnote 1: For numbers that large, the nearest two floats are 32 apart. Modern clang even warns about rounding of an int in the source to a float that represents a different value, unless you write it as a float or double constant already (like 299792458.0f), in which case the rounding happens without warning.

That's why the smallest a value that will round sum up to 299792480.0f instead of down to 299792448.0f is about 16.000001, given that b rounded to 299792448.0f. Runnable example on the Godbolt compiler explorer.

The default FP rounding mode rounds to nearest, with an even mantissa as the tie-break. 16.0 falls exactly half-way, and thus rounds down to a bit-pattern of 0x4d8ef3c2, not up to 0x4d8ef3c3 (https://www.h-schmidt.net/FloatConverter/IEEE754.html). Anything slightly greater than 16 rounds up, because rounding cares about the infinite-precision result; the addition doesn't actually shift out bits beforehand, that was an over-simplification. The nearest float to 16.000001 has only the low bit set in its mantissa, bit-pattern 0x41800001. It's actually about 1.0000001192092896 x 2^4, or 16.0000019... Much smaller and it would round to exactly 16.0f, only half a ULP (unit in the last place) of b, which wouldn't change b because b's mantissa is already even.


If you avoid early rounding by using double a, b, the smallest value you can add that rounds up to 299792480.0f instead of down to 299792448.0f when you do float sum = a+b; is about a = 6.0000001;. That makes sense because the integer value ...58 stays as ...58.0 instead of rounding down to ...48.0f: the rounding error in float b = ...58 was -10, so a can be that much smaller.

There are two rounding steps this time, though, with a+b rounding to the nearest double if that addition isn't exact, then that double rounding to a float. (Or if FLT_EVAL_METHOD == 2, as when C is compiled for 80-bit x87 floating point on 32-bit x86, the + result would round to 80-bit long double, then to float.)

Peter Cordes
  • 328,167
  • 45
  • 605
  • 847
Chris McGrath
  • 1,936
  • 16
  • 17
  • are there 23 or 24 precision? – mharris7190 Mar 05 '14 at 01:33
  • 7
    23 stored, 1 implicit, total 24. – user3125367 Mar 05 '14 at 01:39
  • Is this behaviour guaranteed by standard to happen in C/C++? – TStancek May 24 '18 at 07:35
  • @TStancek: More or less, depending on [`FLT_EVAL_METHOD`](https://en.cppreference.com/w/c/types/limits/FLT_EVAL_METHOD), for C/C++ implementations that promise IEEE-754 `float`. Although unlike ISO C specifies, GCC can keep extra precision even across statements, not just within a single expression, when building for targets like 32-bit x86 with x87 FP (instead of SSE/SSE2). – Peter Cordes Nov 08 '21 at 22:38
  • @Chris: I made a significant edit to this answer, more than I intended to write when I started editing. I was initially just going to add a short example to show that "shifting the bits out" when aligning the mantissas isn't exactly what happens; low bits still matter for rounding. But explaining the details of that grew into a big section. If you want to trim this down, let me know and I can move what I wrote to a new answer. (If you want to keep it in your answer, that's great.) – Peter Cordes Nov 08 '21 at 22:59
  • @Chris McGrath What do you mean by "the default FP rounding mode is "nearest""? – John May 01 '22 at 08:38
  • @Chris McGrath "For numbers that large, the nearest two floats are ***32*** apart". Why it is 32? – John May 01 '22 at 08:47
3

Floating-point numbers have limited precision. If you're using a float, you're only using 32 bits. However some of those bits are reserved for the sign and exponent, so you really only have 23 explicitly-stored bits of mantissa (24 counting the implicit leading 1). The number you give is too large for that many bits, so the last few digits are ignored.

To make this a little more intuitive, suppose all of the bits except 2 were reserved for the exponent. Then we can represent 0, 1, 2, and 3 without trouble, but after that we have to increment the exponent. With a two-bit mantissa and a larger exponent, the representable numbers are spread out in steps of 2: we get 4 and 6, but 5 won't be there. So, 4 + 1 = 4.

Scott Lawrence
  • 1,023
  • 6
  • 14
2

All you really need to know about the mechanics of rounding is that the result you get is the closest float to the correct answer (with some extra rules that decide what to do if the correct answer is exactly between two floats). It just so happens that the smaller number you added is less than half the distance between two floats at that scale, so the result is indistinguishable from the larger number you added. This is correct, to within the limits of float precision. If you want a better answer, use a better-precision data type, like double.

hobbs
  • 223,387
  • 19
  • 210
  • 288
0

Another point-of-view: Pigeon hole principle

float is commonly encoded using 32 bits. Thus only about 2^32 different values can be exactly encoded.
299792458 is not one of them.

Commonly a float is encoded as a dyadic rational with a 24-bit significand times some power-of-2.

float b = 299792458;
// b typically takes on the closest representable float: 299792448.0
printf("%f\n", b);  // --> "299792448.000000"

The next larger representable float is 299792480.0 or 32 away.


Adding 299792448.0 + 3.0 is 299792451.0, but that also cannot be exactly encoded as a float. Per the current rounding mode (round to nearest), the sum is then again 299792448.0.

float a = 3;
float sum = a + b;
printf("%f\n", sum);  // --> "299792448.000000"

Had a been 17, the sum 299792448.0 + 17.0 = 299792465.0 would have rounded up to 299792480.0.

chux - Reinstate Monica
  • 143,097
  • 13
  • 135
  • 256