
The C++ FAQ lite "[29.17] Why doesn't my floating-point comparison work?" recommends this equality test:

#include <cmath>  /* for std::abs(double) */

inline bool isEqual(double x, double y)
{
  const double epsilon = /* some small number such as 1e-5 */;
  return std::abs(x - y) <= epsilon * std::abs(x);
  // see Knuth section 4.2.2 pages 217-218
}
  1. Is it correct that this implies that the only numbers which compare equal to zero are +0 and -0?
  2. Should one use this function also when testing for zero or rather a test like |x| < epsilon?

Update

As pointed out by Daniel Daranas, the function would be better called isNearlyEqual (which is the case I care about).

Someone pointed out "Comparing Floating Point Numbers", which I want to share more prominently.

Micha Wiedenmann
  • I've a sentence in my head which says: never test a double for equality, only greater or smaller. – user743414 Nov 07 '13 at 13:49
  • @user743414 in some scenarios, it is totally fine to test a double to equal. E.g. `if(counter > 10.0) { counter = 0.0; //dostuff }` and elsewhere in code: `if(counter == 0.0){//oh I know that counter is reset} else{//do other stuff}`... – relaxxx Nov 07 '13 at 14:14
  • What do you actually want to do? As for question 1, then yes, the only values that compare equal to +0.0 (or indeed -0.0) are +0.0 and -0.0. But I don't see that the code in the question implies that. – David Heffernan Nov 07 '13 at 15:42
  • @relaxxx: counters are integers. – n. m. could be an AI Nov 07 '13 at 17:04
  • See also the related questions [Why do floating points have signed zeros?](http://stackoverflow.com/q/13544342/96780), [Is it safe to check floating point values for equality to 0 in C#/.NET?](http://stackoverflow.com/q/485175/96780) and [How to efficiently compare the sign of two floating-point values while handling negative zeros](http://stackoverflow.com/q/2922619/96780). – Daniel Daranas Nov 07 '13 at 17:09
  • possible duplicate of [Should we compare floating point numbers for equality against a \*relative\* error?](http://stackoverflow.com/questions/328475/should-we-compare-floating-point-numbers-for-equality-against-a-relative-error) – Ivan Aksamentov - Drop Nov 07 '13 at 17:47
  • See also http://stackoverflow.com/questions/17333/most-effective-way-for-float-and-double-comparison – Vadzim Aug 05 '16 at 14:10
  • The standard library includes the DBL_EPSILON constant, which is about 2.2e-16: the smallest value that, added to 1.0, yields a representable number different from 1.0. Adding it to 2.0 does not change that value at all; for 2.0, 1e-16 is effectively zero. That's why 2.0 == 2.0+1e-16 –  Feb 04 '17 at 13:02
  • I have some trouble understanding how this tests works for the 'nearly equal' situation. E.g. with y = 0 this reduces to `abs(x) <= abs(x) * epsilon`. Now it will take x to be exactly 0 for this to hold true. If x is an infinitesimal number such as 1e-14, the right-hand side will be smaller after the multiplication and the condition will be false. What am I missing? – shasan Aug 31 '17 at 16:08
  • @n.m. Unless you're counting the rate of something, that you want to specifically and exactly reset to `0.0` periodically. In this case you can safely execute the comparison. – pfabri Apr 27 '19 at 17:55
  • @pfabri You don't count rates, you calculate rates. You count *events* and divide the number by the time or whatever to calculate their rate. Events are discrete, they either occur or they don't. – n. m. could be an AI Apr 27 '19 at 19:45
  • Could you please explain the necessity of multiplication with the abs(x) on the right hand side? Thank you! – mabalenk Dec 09 '20 at 10:11
  • Provided equality code is wrong. Use `return std::abs(x - y) <= epsilon * (std::abs(x) + std::abs(y));` I am here just because it happened to me. – Chameleon Jun 12 '22 at 20:45

9 Answers


You are correct with your observation.

If x == 0.0, then abs(x) * epsilon is zero and you're testing whether abs(y) <= 0.0.

If y == 0.0 then you're testing abs(x) <= abs(x) * epsilon which means either epsilon >= 1 (it isn't) or x == 0.0.

So either is_equal(val, 0.0) or is_equal(0.0, val) would be pointless, and you could just write val == 0.0 if you want to accept only exactly +0.0 and -0.0.

The FAQ's recommendation in this case is of limited utility. There is no "one size fits all" floating-point comparison. You have to think about the semantics of your variables, the acceptable range of values, and the magnitude of error introduced by your computations. Even the FAQ mentions a caveat, saying this function is not usually a problem "when the magnitudes of x and y are significantly larger than epsilon, but your mileage may vary".

Ben Voigt

No.

Equality is equality.

The function you wrote will not test two doubles for equality, as its name promises. It will only test if two doubles are "close enough" to each other.

If you really want to test two doubles for equality, use this one:

inline bool isEqual(double x, double y)
{
   return x == y;
}

Coding standards usually recommend against comparing two doubles for exact equality. But that is a different subject. If you actually want to compare two doubles for exact equality, x == y is the code you want.

10.000000000000001 is not equal to 10.0, no matter what they tell you.

An example of using exact equality is when a particular value of a double is used as a synonym of some special state, such as "pending calculation" or "no data available". This is possible only if the actual numeric values after that pending calculation are only a subset of the possible values of a double. The most typical case is when that value is nonnegative, and you use -1.0 as an (exact) representation of a "pending calculation" or "no data available". You could represent that with a constant:

const double NO_DATA = -1.0;

double myData = getSomeDataWhichIsAlwaysNonNegative(someParameters);

if (myData != NO_DATA)
{
    ...
}
Daniel Daranas
  • Your `isEqual` function will surprise you one day :) Probably when you realize how precise floating points are. – BЈовић Nov 07 '13 at 16:34
  • @BЈовић No. It will not surprise me. I will seldom use it, but when I do, I will mean to check that one double is *equal* to another one. 10.00000000000000001 is not equal to 10.0. – Daniel Daranas Nov 07 '13 at 16:52
  • It depends what the error is. Anyway, floating registers can be 80 bits, and float values only 32 bits. Comparing floats can sometimes be quite tricky. – BЈовић Nov 07 '13 at 16:56
  • @kfsone The case of 0.0 and -0.0 can be dealt with specifically, and not involving a general introduction of an "epsilon" tolerance in each comparison between doubles. That said, I seldom (if ever) actually _need_ to check exact equality between doubles in practice. The need to check for `IsNearlyEqual` is far more common - but I just don't pretend or document that I'm checking for equality when I am not. That's the real difference. Again, 10.00000000000000001 is _not_ equal to 10.0. (FWIW, I added a comment in this question with links to related questions.) – Daniel Daranas Nov 07 '13 at 17:05
  • @kfsone Note also that "(-0.0 == 0.0) returns `true`". I am quoting from [this answer](http://stackoverflow.com/a/13544352/96780). – Daniel Daranas Nov 07 '13 at 17:33
  • @kfsone Considering that `+0.0 == -0.0` for any compiler that implements either exactly IEEE 754 or something close to IEEE 754, I have no idea what you are talking about. `x == 0.0` is true if and only if `x` is `-0.0` or `+0.0`. – Pascal Cuoq Nov 07 '13 at 17:35
  • @DanielDaranas I care about the *is nearly* case. – Micha Wiedenmann Nov 07 '13 at 17:43
  • You missed my point: emphasis in my comment on "not as hard". Absolute equality is not always meaningful with doubles. In most quantized systems there is a lower bound to the delta between values that you consider equality. Using "==" is the wrong way to perform such a comparison. If two objects in a simulation are arriving at a collision position from different directions, it is *incredibly* likely that their coordinates will not be magical ideals for float representation, and so `a - b == 0.0` will not be true. – kfsone Nov 07 '13 at 18:04
  • I guess ultimately the problem is people using floating precision when what they actually want is fixed. Daniel's argument (10.00000000000000001 is not 10.0) is fair, except that people don't expect it to be a factor when they are comparing "5.0 + 5.0" vs "15.0 - 5.0". – kfsone Nov 07 '13 at 20:07
  • The trouble with Daniel's argument is that 10.00000000000000001 is also not 10.00000000000000001 as soon as I store it as a float. floats are not exact, they are approximations. – DaveB Mar 13 '15 at 14:14
  • @user3698909 This does not contradict my argument. My argument is that equal values are exactly that, **equal** values. If you're interested in knowing whether two floating point values are equal, test them for equality. If you're not interested in that, then don't test them for equality - test whether they are close enough. – Daniel Daranas Mar 13 '15 at 14:39
  • 1e-63 instead of zero would surprise you someday –  Jan 31 '17 at 00:27
  • @Yuri No, it wouldn't. – Daniel Daranas Jan 31 '17 at 08:41
  • @DanielDaranas You should get familiar with https://en.wikipedia.org/wiki/IEEE_floating_point Comparing floating points which differ by less than DBL_EPSILON does not make sense for such numbers; for them DBL_EPSILON is 0.0 and does not exist. For example, 1.0 == 1 + 1e-16, because 1e-16 is a difference that cannot be represented in FP bitwise form –  Feb 04 '17 at 12:27
  • @YuriS.Cherkasov Equality is equality. Either two expressions of type double are equal, or they are not. Once two values are of type double, it makes sense to check whether they are equal, if you are interested in knowing that. – Daniel Daranas May 08 '19 at 07:51
  • @YuriS.Cherkasov There is nothing about the situation you describe that I don't understand, or that contradicts my answer. – Daniel Daranas May 22 '19 at 08:53
  • This answer contains false information: "10.00000000000000001 is not equal to 10.0, no matter what they tell you". Did you try the code before posting it? https://onlinegdb.com/SJ4SMWv-L – barsdeveloper Jan 23 '20 at 11:24
  • @barsan-md My point was that equal means equal, not "extremely close to". My example was wrong, for doubles. I edited the answer to use 10.000000000000001. – Daniel Daranas Jan 24 '20 at 12:58
  • @barsan-md Code is not "extremely bad" if it serves the purpose you designed it for. So a certain piece of code, out of context, cannot be "extremely bad", except when you deliberately choose to write obfuscated or misleading instructions. About the comparison, it is also equal in short integers. You can try it, too. – Daniel Daranas Jan 27 '20 at 11:04
  • @DanielDaranas What is an example of a calculation that is guaranteed - for all inputs - to produce floats or doubles that are exact representations of the values that were calculated, and thus suitable for direct equality comparisons? I've lost track of the bugs I've had to fix because of an engineer asserting that "10.000000000000001 is not 10.0" and using ==. Confused nanosecond timers, reversed gps coordinates, UIs that drove you nuts, because 10.000000000000001 is not 10.0, but sometimes neither is 5 + 5 (et pol). https://gcc.godbolt.org/z/qdPds6 – kfsone Sep 03 '20 at 01:00
  • `float a = 10.2f - 0.1f, b = 10.0f + 0.1f;` will give you `false` when `a == b`. Is that what you expect? – IC_ Apr 30 '21 at 12:58
  • @Herrgott What I expect doesn't matter. The expressions `10.2f - 0.1f` and `10.0f + 0.1f` are both valid expressions of type float. If after evaluating them, they result in different float values, then `(10.2f - 0.1f) == (10.0f + 0.1f)` is false. That is exactly what `==` means: true if the expressions at its both sides are equal, false if they aren't. – Daniel Daranas Apr 30 '21 at 15:15

If you are only interested in +0.0 and -0.0, you can use fpclassify from <cmath>. For instance:

if (FP_ZERO == std::fpclassify(x)) do_something();

DaBler
  • convenient for templates where you don't know the floating point (or integral) type - it works for integral types. – mheyman Feb 17 '23 at 19:51

You can use std::nextafter with a fixed factor of ULPs (units in the last place) around a value, like the following:

#include <cmath>   // std::nextafter
#include <limits>  // std::numeric_limits

bool isNearlyEqual(double a, double b)
{
  int factor = /* a fixed factor of epsilon */;

  double min_a = a - (a - std::nextafter(a, std::numeric_limits<double>::lowest())) * factor;
  double max_a = a + (std::nextafter(a, std::numeric_limits<double>::max()) - a) * factor;

  return min_a <= b && max_a >= b;
}
Daniel Laügt

2 + 2 = 5(*)

(for some floating-precision values of 2)

This problem frequently arises when we think of "floating point" as a way to increase precision. Then we run afoul of the "floating" part, which means there is no guarantee of which numbers can be represented exactly.

So while values such as 1.0 and -1.0 are represented exactly, 0.1 and -0.1 are already approximations - or we would notice they are, except that we often hide it by truncating the numbers for display.

As a result, we might think the computer is storing "0.003" when it is actually storing the nearest representable double, which differs from 0.003 somewhere around the 17th significant digit.

What happens if you perform "0.0003 - 0.0002"? We expect 0.0001, but the values actually stored are only approximations of 0.0003 and 0.0002, and the subtraction yields the representable value closest to the difference of those approximations - close to 0.0001, but not necessarily exactly 0.0001.

With current floating point math operations, it is not guaranteed that (a / b) * b == a.

#include <stdio.h>

// defeat inline optimizations of 'a / b * b' to 'a'
extern double bodge(int base, int divisor) {
    return static_cast<double>(base) / static_cast<double>(divisor);
}

int main() {
    int errors = 0;
    for (int b = 1; b < 100; ++b) {
        for (int d = 1; d < 100; ++d) {
            // b / d * d ... should == b
            double res = bodge(b, d) * static_cast<double>(d);
            // but it doesn't always
            if (res != static_cast<double>(b))
                ++errors;
        }
    }
    printf("errors: %d\n", errors);
}

ideone reports 599 instances where (b / d) * d != b among the 9,801 combinations of 1 <= b < 100 and 1 <= d < 100.

The solution described in the FAQ is essentially to apply a granularity constraint - to test if (a == b +/- epsilon).

An alternative approach is to avoid the problem entirely by using fixed point precision or by using your desired granularity as the base unit for your storage. E.g. if you want times stored with nanosecond precision, use nanoseconds as your unit of storage.

C++11 introduced std::ratio, which std::chrono uses as the basis for exact conversions between different time units.

kfsone

As @Exceptyon pointed out, this function is 'relative' to the values you're comparing. The epsilon * abs(x) measure scales with the value of x, so the comparison is accurate to a relative error of epsilon, irrespective of the range of values of x or y.

If you're comparing zero (y) to another really small value (x), say 1e-8, then abs(x - y) = 1e-8 is still much larger than epsilon * abs(x) = 1e-13. So unless you're dealing with numbers too small to be represented in a double, this function does the job, and will match zero only against +0 and -0.

The function seems perfectly valid for zero comparison. If you're planning to use it, I suggest you use it everywhere floats are involved, rather than special-casing things like zero, so that the code stays uniform.

ps: This is a neat function. Thanks for pointing to it.

Arun R

Simple comparison of FP numbers has its own specifics, and the key to it is an understanding of the FP format (see https://en.wikipedia.org/wiki/IEEE_floating_point).

When FP numbers are calculated in different ways, one through sin(), the other through exp(), strict equality won't work, even though mathematically the numbers could be equal. In the same way, equality with a constant won't work. Actually, in many situations FP numbers must not be compared using strict equality (==).

In such cases the DBL_EPSILON constant should be used: it is the smallest value that, added to 1.0, changes its representation. For floating point numbers of 2.0 or more, adding DBL_EPSILON typically changes nothing at all. Meanwhile, DBL_EPSILON has exponent -16, which means that all numbers with, say, exponent -34 would compare as absolutely equal under a raw DBL_EPSILON tolerance.

Also, see this example of why 10.0 == 10.0000000000000001.

Comparing two floating point numbers depends on the nature of those numbers: we should scale DBL_EPSILON so it is meaningful for the comparison. Simply put, we should multiply DBL_EPSILON by one of the numbers. Which of them? The maximum, of course:

#include <cmath>   // std::fabs, std::fmax
#include <cfloat>  // DBL_EPSILON

bool close_enough(double a, double b)
{
    if (std::fabs(a - b) <= DBL_EPSILON * std::fmax(std::fabs(a), std::fabs(b)))
    {
        return true;
    }
    return false;
}

Any other approach will give you inequality bugs which can be very hard to catch.

  • it's not added because the mantissa is used by the integer part; 10.00000000000000001 just cannot be represented by a double. – Volodymyr Boiko Jun 26 '18 at 17:45
  • The floating point expression is fine, but I see no reason why it shouldn't directly return the boolean value of the test expression rather than putting it inside an if and returning true or false. Of course the compiler will optimize that anyway, but no need to make the code more verbose than needed. – kriss Feb 01 '22 at 15:35

Consider this example:

bool isEqual = (23.42f == 23.42);

What is isEqual? 9 out of 10 people will say "It's true, of course" and 9 out of 10 people are wrong: https://rextester.com/RVL15906

That's because floating point numbers are not exact numeric representations.

Being binary numbers, they cannot even exactly represent all numbers that can be exactly represented as decimal numbers. E.g. while 0.1 can be exactly represented as a decimal number (it is exactly the tenth part of 1), it cannot be represented exactly in binary floating point, because it is 0.00011001100110011... periodic in binary. 0.1 is for binary floating point what 1/3 is for decimal (which is 0.33333... as a decimal).

The consequence is that a calculation like 0.3 + 0.6 can result in 0.89999999999999991, which is not 0.9, although it is close to it. And thus the test 0.1 + 0.2 - 0.3 == 0.0 may fail, as the result of the calculation may not be exactly 0, although it will be very close to 0.

== is an exact test, and performing an exact test on inexact numbers is usually not very meaningful. Since many floating point calculations involve rounding errors, you usually want your comparisons to allow small errors too, and this is what the test code you posted is all about. Instead of testing "is A equal to B", it tests "is A very close to B", since very close is quite often the best result you can expect from floating point calculations.

Mecki

Notice that the code is equivalent (for x != 0) to:

std::abs((x - y) / x) <= epsilon

You are requiring that the "relative error" on the variable is <= epsilon, not that the absolute difference is.

Exceptyon