
Recently I got into a discussion about floating-point comparison. My position has always been never to compare two floating-point numbers using `==` directly.

It was pointed out that this is not true and that there are cases where using `==` is perfectly fine. I can think of typical cases where I check against IEEE 754 literals, such as ±INF or ±0, but apart from that I cannot think of a case where this wouldn't lead to problems.

So my question is: What are the cases when a floating point comparison using == is valid?

MatthiasB
  • `1. == 1.`? And checking for `NAN` using `==` won't get you very far. – Praetorian Jun 11 '14 at 07:31
  • 1
    It's valid whenever you really want equality, rather than "close enough that it would probably be equal under exact arithmetic". 0 is a common case of that but just as often, you want "close enough to 0". –  Jun 11 '14 at 07:33
  • I think that if you're applying the same calculation* to the same input the numbers should be identical so in this case a `==` comparison should be valid. (*calculation must be deterministic) – ZivS Jun 11 '14 at 07:37
  • @ShaZiv I remember something about different precision of variables inside the fpu and on memory. If you calculate a value, store it back in memory, then calculate the next (same) value, load back the first result from memory and then compare, the check might yield false – MatthiasB Jun 11 '14 at 07:40
  • Answered here: http://stackoverflow.com/questions/4682889/is-floating-point-ever-ok?rq=1 – NiRR Jun 11 '14 at 07:43
  • `==` means equality. If you want to see if two floating point numbers are equal, use `==`. If you want something else, for example checking if two numbers are very close, then don't use `==`, because `==` means equality. Language constructs are neither right or wrong - they just have a meaning, which you should know well before deciding to use them. – Daniel Daranas Jun 11 '14 at 08:42
  • ISTM that this is not a duplicate. The other questions answers if it is OK. The answers say: no, generally it isn't. This question asks WHEN it is ok, which is not answered in the other question. – Rudy Velthuis Jun 11 '14 at 14:00
  • Reopening, I don't believe this is a duplicate of the listed question – Shafik Yaghmour Apr 28 '15 at 14:40

3 Answers

5

The double-precision floating-point representation (64 bits per number) is exact for integers up to ±2^53 (±9,007,199,254,740,992). If you start from integers, do only integer computations with them, and never exceed that limit, then the result is exact and using == is perfectly fine.

Numbers that can in general be represented exactly are N/M where N is an integer and M is a power of two. Thus if you are only doing computations involving, e.g., 1/4, 1/2, 3/4 and integer multiples of them, you are fine too, until the multipliers get very big.

When you instead deal with numbers that cannot be represented exactly (e.g. 0.1), the approximation introduced may lead to surprising results. One source of problems is that intermediate results may be stored in temporaries with higher precision, so the result of a formula may differ depending on whether you store it in memory explicitly, and it may also change with the optimization level.

6502
  • Just a nitpick. The "*and integer multiples of them*" part in the example is superfluous since you said `N` is an integer, because an integer multiplier of an integer is an integer too. Well, you could have made it deliberately, but it looks strange to me. I also think an equivalent definition of the set would be "any linear combination of powers of two" (unless the numbers get too big or too small of course). – luk32 Jun 11 '14 at 07:56
1

Floating point numbers represent exact values in the corresponding base (normally 2). There is nothing wrong with comparing them for equality.

Binary floating point numbers can't represent all decimal values exactly, though; for most fractional decimal values a binary floating point uses an approximation. As long as the decimal numbers don't exceed std::numeric_limits<F>::digits10 significant digits, the resulting representation is uniquely identified within a system, too (for some decimal values there is a choice between two binary representations, in which case the rounding direction should choose the correct one).

The issue which makes floating point numbers a bit weird is that computations round their results, and depending on when rounding occurs, supposedly exact operations become inexact and the order of evaluation matters. Doing arithmetic on rounded values correspondingly increases the errors and yields different values than those obtained, e.g., by converting a decimal value to a binary floating point. You probably don't want to use equality operations on the results of computations.

Dietmar Kühl
1

Here are some examples of valid uses of floating-point equality:

  • when a function is documented as returning HUGE_VAL in some cases, determining whether this happened with result == HUGE_VAL.

  • determining if a double d contains a number representable as a float: d == (double)(float)d. I actually use this in my day job, because I use pairs of doubles to represent both intervals of double values and intervals of float values, and there are points where it is nice to be able to assert that the bounds of an interval of floats are floats.

  • determining whether a floating-point number y is NaN: y != y (NaN is the only value that compares unequal to itself).

  • determining if a floating-point number y is infinity or NaN with y - y == 0.0 (finite values of y make the condition true, NaN and infinities make it false).

  • determining if a bit is set in the significand of a floating-point number, as in the example below, taken from this rant.

    /* coef is a power of two plus one. */
    double t = coef * f;
    double o = f - t + t - f;
    if (o != 0)
    {
      ...
    }
    
Pascal Cuoq
  • Can you comment on the case with [whole numbers like this question has](http://stackoverflow.com/q/29904728/1708801) – Shafik Yaghmour Apr 28 '15 at 14:41
  • @ShafikYaghmour No, life is too short, the mandatory “compare up to epsilon” crappy answer has already been upvoted 4 times. Not worth it. – Pascal Cuoq Apr 28 '15 at 14:52
  • Understood, thank you for taking a look. – Shafik Yaghmour Apr 28 '15 at 14:57
  • @PascalCuoq When I compare a variable `float x = 3.5` with `3.5`, it compares equal, but not when I do `float x = 3.4` and `if (x == 3.4)`. If I print the data types of both, I see `x` is a `float` but `3.4` is treated as a `double` by the compiler, which looks like one of the reasons why the latter evaluates as false. But then how does the former evaluate to true? – y_159 Jun 10 '21 at 08:10