
Q1: Why is it not recommended to compare floats with == or != as in V1?
Q2: Does fabs() in V2 work the same way as I programmed it in V3?
Q3: Is it OK to use (x >= y) and (x <= y)?
Q4: According to Wikipedia, float has a precision between 6 and 9 digits — in my case 7 digits. So what determines which precision between 6 and 9 digits my float has? See [1]


[1] float characteristics

Source: Wikipedia
Type   | Size              | Precision           | Range
Float  | 4Byte ^= 32Bits   | 6-9 decimal digits  | (2-2^-23)*2^127 ~ 3.4E+38

Source: tutorialspoint
Type   | Size              | Precision           | Range
Float  | 4Byte ^= 32Bits   | 6 decimal digits    | 1.2E-38 to 3.4E+38

Source: chortle
Type   | Size              | Precision           | Range
Float  | 4Byte ^= 32Bits   | 7 decimal digits    | -3.4E+38 to +3.4E+38

The following three programs produce the same output, yet it is not recommended to use the first variant.

1. Variant

#include <stdio.h>    // printf()
int main()
{
  float a = 3.1415926;
  float b = 3.1415930;

  if (a == b)
  {
    printf("a(%+.7f) == b(%+.7f)\n", a, b);
  }
  if (a != b)
  {
    printf("a(%+.7f) != b(%+.7f)\n", a, b);
  }
  return 0;
}

V1-Output:

a(+3.1415925) != b(+3.1415930)

2. Variant

#include <stdio.h>    // printf()
#include <float.h>    // FLT_EPSILON ~ 1.19e-07
#include <math.h>     // fabs()
int main()
{
  float x = 3.1415926;
  float y = 3.1415930;

  if (fabs(x - y) < FLT_EPSILON)
  {
    printf("x(%+.7f) == y(%+.7f)\n", x, y);
  }
  if (fabs(x - y) > FLT_EPSILON)
  {
    printf("x(%+.7f) != y(%+.7f)\n", x, y);
  }
  return 0;
}

V2-Output:

x(+3.1415925) != y(+3.1415930)

3. Variant:

#include <stdio.h>    // printf()
#include <float.h>    // FLT_EPSILON ~ 1.19e-07
#include <stdlib.h>   // abs()
int main()
{
  float x = 3.1415926;
  float y = 3.1415930;

  const int FPF = 10000000;   // Float_Precision_Factor
  if ((float)(abs((x - y) * FPF)) / FPF < FLT_EPSILON)   // if (x == y)
  {
    printf("x(%+.7f) == y(%+.7f)\n", x, y);
  }
  if ((float)(abs((x - y) * FPF)) / FPF > FLT_EPSILON)   // if (x != y)
  {
    printf("x(%+.7f) != y(%+.7f)\n", x, y);
  }
  return 0;
}

V3-Output:

x(+3.1415925) != y(+3.1415930)

I am grateful for any help, links, references and hints!

PatrickSteiner
  • https://stackoverflow.com/questions/588004/is-floating-point-math-broken – yano Oct 24 '17 at 21:03
  • Try this: `float f; for(f = 0.0; f != 1.0; f += 0.1) printf("%.1f\n");` – Steve Summit Oct 24 '17 at 22:31
  • You don't usually have a problem when your numbers are unequal. The problems arise when you have two floating-point numbers that you *think* are (or ought to be) equal, but `==` and `!=` say they're not equal. – Steve Summit Oct 24 '17 at 22:36
  • @SteveSummit Thank you for the example, now I see the problem, it ignores the stop condition. – PatrickSteiner Oct 25 '17 at 08:09

3 Answers


When working with floating-point operations, almost every step may introduce a small rounding error. Convert a number from decimal in the source code to the floating-point format? There is a small error, unless the number is exactly representable. Add two numbers? Their exact sum often has more bits than fit in the floating-point format, so it has to be rounded to fit. The same is true for multiplication and division. Take a square root? The result is usually irrational and cannot be represented in the floating-point format, so it is rounded. Call the library to get the cosine or the logarithm? The exact result is usually irrational, so it is rounded. And most math libraries have some additional error as well, because calculating those functions very precisely is hard.

So, let’s say you calculate some value and have a result in x. It has a variety of errors incorporated into it. And you calculate another value and have a result in y. Suppose that, if calculated with exact mathematics, these two values would be equal. What is the chance that the errors in x and y are exactly the same?

It is unlikely. If x and y were calculated in different ways, they experienced different errors, and it is essentially chance whether they have the same total error or not. Therefore, even if the exact mathematical results would be equal, x == y may be false because of the errors.

Similarly, two exact mathematical values might be different, but the errors might coincide so that x == y returns true.

Therefore x == y and x != y generally cannot be used to tell if the desired exact mathematical values are equal or not.

What can be used? Unfortunately, there is no general solution to this. Your examples use FLT_EPSILON as an error threshold, but that is not a useful general choice. FLT_EPSILON is the step from 1 to the next representable float, so it is only a meaningful scale for numbers near 1. And after more than a few floating-point operations, the error may easily accumulate to more than FLT_EPSILON, whether measured as an absolute error or a relative error.

In order to make a comparison, you need to have some knowledge about how large the accumulated error might be, and that depends greatly on the particular calculations you have performed. You also need to know what the consequences of false positives and false negatives are—is it more important to avoid falsely stating two things are equal or to avoid falsely stating two things are unequal? These issues are specific to each algorithm and its data.

Eric Postpischil
  • So that means the way of handling rounding errors always depends on the situation and there is no general approach? – PatrickSteiner Oct 25 '17 at 06:50
  • 1
    Yes, pretty much. Floating point is widely used to calculate approximations. Using it for exact or very precise work requires engineering effort. – Eric Postpischil Oct 25 '17 at 11:51
  • Nice answer. And as a very simple example of precisely what you were talking about, I tried `double x = 37; double y = pow(sqrt(x), 2); if(x == y) printf("equal\n"); else printf("unequal\n");`, and sure enough, it printed `unequal`. – Steve Summit Oct 25 '17 at 11:59

Because on a 64-bit machine you will find that 0.1 * 3 == 0.30000000000000004 :-)

See the links @yano and @PM-77-1 provided as comments.

Doncho Gunchev

You know the machine stores everything using 0s and 1s. Also know that not every floating-point value is representable in binary within a limited number of bits. The computer stores the nearest representable binary value of the given number.

So there is a difference between 2.0000001 and 2.0000000 in the eyes of the computer (even though we would say they are equal!).

This trouble does not always appear, but it is risky.

Chitholian