Why is the result of this code false? I can't figure out this problem.
#include <stdio.h>

int main(int argc, char **argv)
{
    if ( (1.1 - 1.0)*10.0 - 1.0 == 0.0 )
        printf("True");
    else
        printf("False");
    return 0;
}
Chasing equality in floating point is mostly a fool's game.
The best you can do is decide on a delta that is "close enough" and compare against that.
Google pointed me to this for more information: http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
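For example, a tolerance-based comparison for the original expression might look like the sketch below. The threshold of a few multiples of DBL_EPSILON is just an assumption that suits values near 1.0; pick a delta that matches the scale of your own data.

#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void)
{
    double x = (1.1 - 1.0) * 10.0 - 1.0;   /* about 8.9e-16, not exactly 0 */

    /* Treat the result as zero if it is within a small tolerance.
       The tolerance chosen here is an assumption suited to values near 1. */
    if (fabs(x) < 10.0 * DBL_EPSILON)
        printf("True");
    else
        printf("False");
    return 0;
}

This prints "True" because the rounding error left over from the computation is far smaller than the chosen delta.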
But this happens to work the way you wanted, because each assignment rounds the intermediate result to float, and those roundings land back on exact values:

#include <stdio.h>

int main(int argc, char **argv)
{
    float x, y;
    y = 0.0;
    x = 1.1 - 1.0;   /* double result ~0.1000000000000000888 rounds to the nearest float */
    x = x * 10.0;    /* that value times 10 rounds back to exactly 1.0f */
    x = x - 1.0;     /* so the difference is exactly 0.0 */
    if ( x == y )
        printf("True");
    else
        printf("False");
    return 0;
}
Most double math uses a binary floating-point representation, so 1.1 is not exactly representable, just something close to it. Consider the following, which uses volatile to prevent compiler optimizations.
#include <stdio.h>

int main(void) {
    volatile double wpw = 1.1;
    volatile double one = 1.0;
    volatile double ten = 10.0;
    printf("%.20e\n", wpw);
    printf("%.20e\n", wpw - one);
    printf("%.20e\n", (wpw - one) * ten);
    printf("%.20e\n", (wpw - one) * ten - one);
    return 0;
}
Its output is below. 1.1 - 1.0 is only approximately 0.1.
1.10000000000000008882e+00
1.00000000000000088818e-01
1.00000000000000088818e+00
8.88178419700125232339e-16
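If you want to see the stored values directly, a small sketch like the one below (my addition, not part of the answer above) prints 1.1 - 1.0 next to the literal 0.1, both with extra decimal digits and with %a, which shows the exact bits as hexadecimal floating point:

#include <stdio.h>

int main(void)
{
    /* Enough decimal digits to expose the difference between the two values. */
    printf("1.1 - 1.0 = %.20e\n", 1.1 - 1.0);
    printf("0.1       = %.20e\n", 0.1);

    /* %a prints the exact value stored in the double. */
    printf("1.1 - 1.0 = %a\n", 1.1 - 1.0);
    printf("0.1       = %a\n", 0.1);
    return 0;
}

The %a lines show that the two stored values are not identical; that tiny difference is what the == comparison in the question trips over.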