
I have gone through different threads about comparing floats for equality, but it is not clear to me: do we also need epsilon-based logic when comparing floats with less-than or greater-than?

For example:

float a, b;
if (a < b) // is this the correct way to compare two float values, or do we need an epsilon for the less-than comparison?
{
}
if (a > b) // is this the correct way to compare two float values for the greater-than comparison?
{
}

I know that for comparing floats for equality, we need some epsilon value:

bool AreSame(double a, double b)
{
    return fabs(a - b) < EPSILON;
}
Jarod42
Law Kumar
    _"I know for comparing for equality of float, we need some epsilon value"_ — No, we don't need some epsilon. It's completely application-dependent. – Daniel Langr May 18 '22 at 06:51
  • Yes, if your application depends on a specific accuracy level. Any operation involving floating points will likely introduce errors. Even two values that appear to be the same will often differ, even if by 0.000001. – ChrisBD May 18 '22 at 06:52

5 Answers


It really depends on what should happen when both values are close enough to be seen as equal, meaning fabs(a - b) < EPSILON. In some use cases (for example computing statistics), it does not matter much whether the comparison between two close values yields equality or not.

If it matters, you should first determine the uncertainty of the values. That really depends on the use case (where the input values come from and how they are processed); two values differing by less than that uncertainty should then be considered equal. But that equality is no longer a true mathematical equivalence relation: you can easily imagine building a chain of close values connecting two truly different values. In mathematical terms, the relation is not transitive (or, in everyday words, it is only "almost transitive").

I am sorry, but as soon as you have to process approximations there cannot be any precise and consistent way: you have to think about the real-world use case to determine how you should handle the approximation.

Serge Ballesta

When you are working with floats, it's inevitable that you will run into precision errors.

In order to mitigate this, when checking two floats for equality we often check whether their difference is small enough.

For lesser and greater, however, there is no way to tell with full certainty which float is larger. The best approach (presumably for your intentions) is to first check whether the two floats are the same, using the areSame function. If so, return false (since a = b implies that a < b and a > b are both false).

Otherwise, return the value of a < b or a > b respectively.

Ryan Zhang
  • (a) Check for supposed “equality” first and then testing `a < b` is equivalent to merely testing `a < b+e` for some value of `e`, so it is wasteful. (b) The stated test produces “false” if `areSame` returns true. But when a program is testing `a < b`, we do not know what result is best to use when the ideal mathematical result cannot be known. For some applications, if the ideal `a` < `b` cannot be ruled out, `a < b` should evaluate as true. For some applications, it should evaluate as false. So no single comparison method should be recommended; it must be application-dependent. – Eric Postpischil May 18 '22 at 13:14

The answer is application dependent.

If you are sure that a and b are sufficiently different that numerical errors will not reverse the order, then a < b is good enough.

But if a and b are dangerously close, you might require a < b + EPSILON. In such a case, it should be clear to you that < and ≤ are not distinguishable.

Needless to say, EPSILON should be chosen with the greatest care (which is often pretty difficult).

  • @Jarod42: nope, if a is perturbed by EPSILON, it can exceed b. Otherwise, you are in the first scenario. –  May 18 '22 at 08:57

It ultimately depends on your application, but I would say generally no.

The problem, very simplified, is that if you calculate (1/3) * 3 and get the answer 0.999999, then you want that to compare equal to 1. This is why we use epsilon values for equality comparisons (and the epsilon should be chosen according to the application and expected precision).

On the other hand, if you want to sort a list of floats then by default the 0.999999 value will sort before 1. But then again what would the correct behavior be? If they both are sorted as 1, then it will be somewhat random which one is actually sorted first (depending on the initial order of the list and the sorting algorithm you use).

The problem with floating point numbers is not that they are "random" and that it is impossible to predict their exact values. The problem is that base-10 fractions don't translate cleanly into base-2 fractions, and that non-repeating decimals in one system can translate into repeating ones in the other, which then results in rounding errors when truncated to a finite number of digits. We use epsilon values for equality comparisons to handle the rounding errors that arise from these back-and-forth translations.

But do be aware that the nice relations that ==, < and <= have for integers don't always carry over to floating points, exactly because of the epsilons involved. Example:

  • a = x
  • b = a + epsilon/2
  • c = b + epsilon/2
  • d = c + epsilon/2

Now: a == b, b == c, c == d, BUT a != d and a < d. In fact, you can continue the sequence, keeping num(n) == num(n+1), and at the same time get an arbitrarily large difference between a and the last number in the sequence.

Frodyne
    See [does-using-epsilon-in-comparison-of-floating-point-break-strict-weak-ordering](https://stackoverflow.com/questions/68114060/does-using-epsilon-in-comparison-of-floating-point-break-strict-weak-ordering) about using epsilon in comparison in `std::sort` function/`std::map` or any method which requires strict weak ordering. – Jarod42 May 18 '22 at 08:51
  • @Jarod42 Thank you, that was exactly what I was trying to say in the last half - only you said it better and more succinctly. :) – Frodyne May 18 '22 at 10:43
  • “The problem is that base-10 fractions don't translate cleanly into base-2 fractions” is not a correct categorization of “the problem.” Converting between different bases is **a** problem but is not the only problem, nor even the major problem. With **any** fixed-precision arithmetic format, whether floating-point, fixed-point, or integer, whether decimal or binary or some other base, not all real numbers can be represented, and therefore there must be some sort of arithmetic errors… – Eric Postpischil May 18 '22 at 13:04
  • … Even when decimal numerals can be converted to floating-point exactly, with no rounding errors, there will generally be rounding errors when arithmetic is performed. Dividing a number by any number that is not a power of two (for base-two floating-point) must round the result. Multiplying numbers such that the significand exceeds the representable precision must round the result. So must adding and subtracting. Evaluating functions like square root, logarithm, and sine generally must round the results. – Eric Postpischil May 18 '22 at 13:06
  • “But do be aware that the nice relations that `==`, `<` and `<=` have for integers” is also a miscategorization. The operations `==`, `<`, and `<=` are perfect in both integer arithmetic and floating-point arithmetic: They produce the result “true” if and only if the mathematical relation between the operands is equality, less than, or less-than-or-equal, respectively. The problem is not in these operations but is in the operands. With floating-point operands, it is quite common for there to be errors in the operands such that the desired mathematical relationships do not hold… – Eric Postpischil May 18 '22 at 13:08
  • … and applying the operators merely reflects that. For example, if `x` and `y` have been calculated with floating-point arithmetic, and `x` would be greater than `y` if there had been no rounding errors, but the computed result has `x` less than `y`, then `x < y` correctly evaluates as “true,” even though the desired result is “false.” This is true for integer arithmetic too; if calculating `x` used some division that discarded a remainder, and this caused `x` to be less than `y` even though the desired `x` would be greater than `y`, then `x < y` evaluates as “true,” not the desired “false.” – Eric Postpischil May 18 '22 at 13:10
  • @EricPostpischil Thank you for your comments, you are (generally) right, but you do also overstate a bit: "Dividing a number by any number that is not a power of two (for base-two floating-point) must round the result." 5/8 can be exactly represented in base-2 floating point; divide it by 5 and you get 1/8, which is also exactly representable in base-2 floating point. But then again, a full deep dive into all the things that can (and cannot) go wrong with floating point numbers can (and has) filled a book. – Frodyne May 18 '22 at 13:22
  • @EricPostpischil As for the "nice relations", again I agree with your comments. But my point was that the relations break when you add epsilons into the mix. For pure floating point, then they are as correct as they are for integers (-ish, NAN does not exist for integers), but if you add epsilon-closeness to the floating point versions then you break strict-weak-ordering and maybe more. – Frodyne May 18 '22 at 13:26

As others have stated, there will always be precision errors when dealing with floats.

Thus, you should have an epsilon value even for comparing less than / greater than.

We know that in order for a to be less than b, firstly a must be different from b. Checking this is a simple NOT-equals, which uses the epsilon. Then, once you know that a != b, the operator < is sufficient.

Hung Thai
  • “We know that in order for `a` to be less than `b`, firstly, `a` must be different from `b`”: We do not know that. It is possible that the value of `a`, if computed with ideal real-number arithmetic, would be less than the value of `b` computed with ideal real-number arithmetic, but for the rounding effects of floating-point arithmetic to have produced a computed `a` that equals the computed `b`, or even an `a` that exceeds `b`. Merely testing “NOT equals” first is not a universal solution. Solutions are application-dependent (and may not exist for a particular application). – Eric Postpischil May 18 '22 at 13:17