
I've read several resources on the net, and I understand there is no single value or universal parameter for comparing floating-point numbers. I've read several replies here and found the code Google Test uses to compare floats. I want to better understand the meaning of ULP and its value. Reading the comments in that source code, I found:

The maximum error of a single floating-point operation is 0.5 units in the last place. On Intel CPU's, all floating-point calculations are done with 80-bit precision, while double has 64 bits. Therefore, 4 should be enough for ordinary use.

It's not really clear why "therefore, 4 should be enough". Can anyone explain why? From my understanding, we are saying that we can tolerate a difference of 4*10^-6 (float) or 4*10^-15 (double) between our numbers before declaring them unequal, based on the number of significant digits of float (6-7) or double (15-16). Is that correct?

greywolf82
  • More reading (if you are interested - not my blog): https://randomascii.wordpress.com/2013/02/07/float-precision-revisited-nine-digit-float-portability/ The whole series is very informative. – Richard Critten May 15 '19 at 17:12
  • To be clear, it is not "difference between our numbers", but _relative_ "difference between our numbers" -relative to the magnitude of the operands/result. It can get complicated. – chux - Reinstate Monica May 15 '19 at 23:17

1 Answer


It is wrong. Very wrong. Consider that every operation can accumulate some error—½ ULP is the maximum (in round-to-nearest mode), so ¼ might be an average. So 17 operations are enough to accumulate more than 4 ULP of error just from average effects.¹ Today’s computers do billions of operations per second. How many operations will a program do between its inputs and some later comparison? That depends on the program, but it could be zero, dozens, thousands, or millions just for “ordinary” use. (Let’s say we exclude billions because then it gets slow for a human to use, so we can call that special-purpose software, not ordinary.)

But that is not all. Suppose we add a few numbers around 1 and then subtract a number that happens to be around the sum. Maybe the adds get a total error around 2 ULP. But when we subtract, the result might be around 2⁻¹⁰ instead of around 1. So the ULP of 2⁻¹⁰ is 1024 times smaller than the ULP of 1. That error that is 2 ULP relative to 1 is 2048 ULP relative to the result of the subtraction. Oops! 4 ULP will not cut it. It would need to be 4 ULP of some of the other numbers involved, not the ULP of the result.

In fact, characterizing the error is difficult in general and is the subject of an entire field of study, numerical analysis. 4 is not the answer.

Footnote

¹ Errors will vary in direction, so some will cancel out. The behavior might be modeled as a random walk, and the average error might be proportional to the square root of the number of operations performed.

Eric Postpischil
  • Are you sure about excluding billions? Many engineering and scientific applications solve systems of a few thousand equations. – Patricia Shanahan May 16 '19 at 18:30
  • @PatriciaShanahan: I am just being conservative about what is “ordinary,” limiting it to applications a human would use interactively, to make the point that 4 is way too low even for conservative meanings. – Eric Postpischil May 16 '19 at 18:52