
Possible Duplicate:
Most effective way for float and double comparison

I am new to C++, and while reading about it I had a doubt: how do you decide whether two floating-point numbers are equal to each other or not?

Thanks in advance

Jholar99

3 Answers

5

There is a special constant you need to know of, called DBL_EPSILON (or FLT_EPSILON for float). It is defined as the difference between 1.0 and the next representable value of the type, so it measures the format's relative precision at 1.0. For numbers of larger magnitude, adding DBL_EPSILON changes nothing at all, which is why you must scale it to the magnitude of the numbers you are comparing. The correct expression for comparing two doubles is:

if (fabs(a-b) <= DBL_EPSILON * fmax(fabs(a), fabs(b)))
{
    // ...
}
Don Reba
  • `FLT_EPSILON` is **not** the smallest value that can be added to 1.0f and change its value. http://blog.frama-c.com/index.php?post/2013/05/09/FLT_EPSILON – Pascal Cuoq May 13 '13 at 17:59
2

If your floating point types use IEEE 754 representation (most likely this is the case), then you should use the fact that the ordering of the binary representation of floats is the same as the ordering by value. That is, if you increment the binary representation of a float by one bit, you get the next larger number.

Using this fact, we can compare floats by counting their binary difference. This is called "comparison by unit-in-last-place (ULP)". There are some subtleties involving signs, zeros, infinities and NaNs, but that's the gist of it. Here is a comprehensive article explaining this.

Basically, we consider two floats equal if they differ by some small number of units in the last place. Together with your compiler's documentation of the accuracy of its math functions in ULPs and your own code's requirements, you can determine which cut-off suits your needs.

In pseudo code:

double x, y;

// type punning via memcpy avoids the strict-aliasing UB of reinterpret_cast
uint64_t ux, uy;
std::memcpy(&ux, &x, sizeof x);
std::memcpy(&uy, &y, sizeof y);

// subtract the smaller from the larger so the unsigned difference cannot wrap
return (ux > uy ? ux - uy : uy - ux) < CUT_OFF; // e.g. CUT_OFF = 3;

The above code is just a crude example which won't work, you have to take care of lots of special cases before this final comparison. See the article for details.

Kerrek SB
  • I love this idea, but won't the last line in your example choke if ux < uy? – spraff Jul 29 '11 at 13:32
  • @Spraff: This is just the rough gist. You'll have to implement it more carefully, indeed (e.g. cast to signed int first, or compare first and then subtract in the right order). – Kerrek SB Jul 29 '11 at 13:44
  • I suppose you could require that all non-mantissa parts be equal and then mask them out... – spraff Jul 29 '11 at 13:47
  • Well, any sensible implementation would start with `if (sign(a) != sign(b)) return a == b`, so we'd probably not have to worry about that later. – Kerrek SB Jul 29 '11 at 13:49
  • I meant if the exponent is different then there is an order of magnitude difference in the values, which is probably different enough :-P – spraff Jul 29 '11 at 14:05
  • @spraff: Yeah, you could certainly do that. In the grand scheme of things, though, I'm not sure if that would save any steps compared to jumping straight to the integer comparison, but sure, it's a possibility! – Kerrek SB Jul 29 '11 at 14:12
  • It would eliminate a large category of mismatches in one go, including the problem case I just described. Sign-extend and compare the mantissa afterwards. – spraff Jul 29 '11 at 15:11
1

Obviously, you should not use operator == to compare them.

The important concept here is: if the difference between your two floating-point numbers is smaller than the precision requirement of the problem you are solving, or smaller than your error tolerance, you should consider them equal.

There are some practical suggestions, such as

  fabs(f1 - f2) < precision-requirement
  fabs(f1 - f2) < max(fabs(f1), fabs(f2)) * percentage-precision-requirement
Saurabh Gokhale
  • Ah, it depends. Sometimes exact binary equality is enough or even required. And many "magic" constants (like 0, 1, integers) are exact anyway, and you don't want nearly equal values to be treated as equal. These might be regarded as special cases, but they're not that rare. So I wouldn't always call `==` the wrong solution, but weigh it against the situation. Of course this needs some more acquaintance with inexact floating point representations. – Christian Rau Jul 29 '11 at 13:16
  • The problem with the difference (first version) is that it doesn't give you the same measure of "closeness" across all scales. Numbers near zero will need a much finer `precision_requirement` than numbers near the extreme ends of the range. – Kerrek SB Jul 29 '11 at 13:21
  • This is not a correct answer. First of all, the precision requirement might be smaller than epsilon. Second, you have to scale by the max of f1 and f2, not min. – Don Reba Jul 29 '11 at 13:30