
Suppose I have two functions f1 and f2 that each return a double and take the same input (a const reference to some kind of object).

These functions are designed in such a way that given an input x,

f1(x) <= f2(x) 

should always hold.

When I test this assertion on a set of 1000 input instances, a small subset of them fails. What's remarkable is that in all failing cases f1(x) exceeds f2(x) by a delta of less than 10^-13.
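
To illustrate the kind of effect I suspect, here is a toy example (not my actual f1/f2, just two expressions that are equal in real arithmetic but evaluated in a different association order):

#include <cstdio>

int main()
{
    // Two mathematically identical sums, evaluated in different orders.
    // The intermediate roundings differ, so the results can differ by one
    // unit in the last place (ULP), and which one ends up larger is
    // essentially arbitrary.
    const double s1 = (0.1 + 0.2) + 0.3;
    const double s2 = 0.1 + (0.2 + 0.3);
    printf("s1      = %.17g\n", s1);       // typically 0.60000000000000009
    printf("s2      = %.17g\n", s2);       // typically 0.59999999999999998
    printf("s1 - s2 = %.17g\n", s1 - s2);  // ~1.1e-16, enough to break s1 <= s2
    return 0;
}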

The following code sample is sketchy, but it should be enough for demonstration purposes:

const InputInstance x{...};
const double a{f1(x)};
const double b{f2(x)};
assert(a <= b);

In some other file, I have the functions f1 and f2 declared as follows:

const double f1(const InputInstance& x);
const double f2(const InputInstance& x);

The following code

printf("FLT_RADIX = %d\n", FLT_RADIX);
printf("DBL_DIG = %d\n", DBL_DIG);
printf("DBL_MANT_DIG = %d\n", DBL_MANT_DIG);

prints:

FLT_RADIX = 2
DBL_DIG = 15
DBL_MANT_DIG = 53

on my system.
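
To make sure I read these constants correctly, I also looked at the gap between adjacent doubles at two example magnitudes (1.0 and 1000.0 are arbitrary picks, not the actual range of my results):

#include <cfloat>
#include <cmath>
#include <cstdio>

int main()
{
    // DBL_MANT_DIG = 53 significand bits means adjacent doubles differ by a
    // *relative* amount of about DBL_EPSILON; the *absolute* gap between
    // adjacent doubles grows with the magnitude of the value.
    printf("DBL_EPSILON        = %g\n", DBL_EPSILON);                              // ~2.2e-16
    printf("gap next to 1.0    = %g\n", std::nextafter(1.0, 2.0) - 1.0);           // ~2.2e-16
    printf("gap next to 1000.0 = %g\n", std::nextafter(1000.0, 2000.0) - 1000.0);  // ~1.1e-13
    return 0;
}

If my results happen to be of magnitude around a thousand, a difference of just a few ULPs is already on the order of the 10^-13 deltas I'm seeing.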

If I understand this correctly, I should be able to expect the output doubles to coincide up to the 15th decimal digit. Right?

Should I avoid using the '<=' operator on doubles? Does the 13th decimal digit have a meaning I'm not aware of, or should I stop complaining and look for a bug in my code ;-) ?
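
If the answer turns out to be that a strict '<=' on the results of separate computations is too fragile, I imagine the check would have to be relaxed with some tolerance, roughly like this (just a sketch; the name leq_with_tolerance and the factor 16 are placeholders I made up, not recommendations):

#include <algorithm>
#include <cmath>
#include <limits>

// Relaxed version of 'a <= b': allow a to exceed b by a small relative
// tolerance, scaled to the magnitude of the operands.
// The factor 16 is an arbitrary placeholder, not a recommendation.
bool leq_with_tolerance(double a, double b)
{
    const double scale = std::max({1.0, std::fabs(a), std::fabs(b)});
    const double tol   = 16.0 * std::numeric_limits<double>::epsilon() * scale;
    return a <= b + tol;
}

The assertion above would then become assert(leq_with_tolerance(a, b)). But I don't know whether that is the right approach, hence this question.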

  • This is just rounding errors, isn't it? You're lucky that they only appear in the 13th decimal place. Floating point arithmetic is not accurate and errors accumulate. – john Dec 20 '18 at 10:07
  • Concise answer: Using operator `<=` is just fine. But you must not assume that results of separate calculations that are close to each other would have the same ordering in floating point math as they do in real math. "Close" is relative and depends on the desired precision as well as the amount of error accumulated into the result. – eerorika Dec 20 '18 at 10:11
  • @john -- floating-point arithmetic is 100% accurate. But floating-point values are not real numbers, so the results are different from what you'd get with real numbers. It's really the same issue as with `int` values: (1/3)*3 is not 1, and (1.0/3.0)*3.0 is not 1.0. The underlying problem is that floating-point math is not taught well, probably because most people don't understand it. – Pete Becker Dec 20 '18 at 13:36

0 Answers