The behavior of these two programs will vary between computers and operating systems: you are testing floats for exact equality.
In memory, floats are stored as a string of bits in binary, e.g. 0.1 in binary (0.1b) represents 0.5 in decimal (0.5d).
Similarly,
Binary | Decimal
0.1    | 2^-1 = 1/2
0.01   | 2^-2 = 1/4
0.001  | 2^-3 = 1/8
0.11   | 2^-1 + 2^-2 = 3/4
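Just to make the last row concrete, here is a tiny sketch (my own illustration, not anything from the original programs) that sums the bit values of 0.11b:

#include <stdio.h>

int main(void) {
    /* 0.11 in binary is one half plus one quarter. */
    double half    = 1.0 / 2.0;   /* 0.1b  = 2^-1 */
    double quarter = 1.0 / 4.0;   /* 0.01b = 2^-2 */
    printf("0.11b = %f\n", half + quarter);   /* prints 0.750000 */
    return 0;
}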
The problem is that some decimals don't have nice floating point representations.
0.1d = 0.0001100110011001100110011...b, which is infinitely long.
So, 0.5 is really nice in binary:
0.5d = 0.1000000000000000...b
but 0.1 is really nasty:
0.1d = 0.00011001100110011...b
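You can see the truncation directly by printing more digits than the type actually stores. The exact digits depend on your platform, but on a typical IEEE-754 system the output looks roughly like the comments below:

#include <stdio.h>

int main(void) {
    float  f = 0.1f;
    double d = 0.1;
    /* Printing 20 digits exposes the value that is actually stored. */
    printf("float : %.20f\n", f);   /* e.g. 0.10000000149011611938 */
    printf("double: %.20f\n", d);   /* e.g. 0.10000000000000000555 */
    return 0;
}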
Now, depending on your compiler and how the literal is written, the constant you compare against may be a double (an unsuffixed 0.1 always is), which keeps more of the infinite sequence 0.0001100110011001100110011001100110011...b, so it is not equal to the float version, which truncates the sequence much earlier. On the other hand, 0.5f compares equal regardless of how many binary places are stored, since everything after the first place is zero.
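Here is a short sketch of that effect (assuming IEEE-754 floats, which almost every current platform uses):

#include <stdio.h>

int main(void) {
    float a = 0.1f;
    float b = 0.5f;
    /* 0.1 without a suffix is a double literal, so a is widened to
       double before the comparison and the truncated bits differ. */
    printf("a == 0.1  -> %d\n", a == 0.1);    /* 0 */
    printf("a == 0.1f -> %d\n", a == 0.1f);   /* 1: both sides truncated the same way */
    /* 0.5 is exact in binary, so widening changes nothing. */
    printf("b == 0.5  -> %d\n", b == 0.5);    /* 1 */
    return 0;
}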
The accepted way to compare floats or doubles in C++ or C is to #define a very small number (I like to call it EPS, short for EPSILON) and replace
float a = 0.1f;
if (a == 0.1f) {
    printf("IF\n");
} else {
    printf("ELSE\n");
}
with
#include <math.h>
#define EPS 0.0000001f

float a = 0.1f;
if (fabsf(a - 0.1f) < EPS) {  /* plain abs() would truncate to an int; fabsf() handles floats */
    printf("IF\n");
} else {
    printf("ELSE\n");
}
Effectively, this tests if a is 'close enough' to 0.1f instead of testing for exact equality. For 99% of applications, this approach works just fine, but for super-sensitive calculations some stranger tricks are needed that involve using long double, or defining a custom data type.
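If you compare floats in more than one place, it is convenient to wrap the EPS test in a small helper. This is just one way to package the same idea; the name float_eq is my own choice, not anything standard:

#include <math.h>
#include <stdio.h>

#define EPS 0.0000001f

/* Returns 1 when x and y differ by less than EPS, 0 otherwise. */
static int float_eq(float x, float y) {
    return fabsf(x - y) < EPS;
}

int main(void) {
    float a = 0.1f;
    printf(float_eq(a, 0.1f) ? "IF\n" : "ELSE\n");   /* prints IF */
    return 0;
}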