I would like to know the mechanism by which the compiler displays the value of a float that cannot be stored accurately. Example:
float a = 0.056;
printf("value = %f", a); // this prints "value = 0.056000"
If you try to store 0.056 in binary floating-point format, you get this (use this link for the conversion):
0.00001110010101100000010000011000 which is equal to 0.0559999998658895
1. How does the compiler show 0.056 when it should show 0.055999999?
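To make the premise concrete, here is a minimal sketch of how one can inspect the bits that actually get stored. It assumes IEEE 754 single precision and a 32-bit unsigned int, which is what I expect on a typical Visual Studio 2008 build, and simply dumps the bit pattern next to the default and extended printf output:

#include <stdio.h>
#include <string.h>

int main(void)
{
    float a = 0.056f;
    unsigned int bits;

    /* Copy the raw bit pattern of the float into an integer
       (assumes float and unsigned int are both 32 bits wide). */
    memcpy(&bits, &a, sizeof bits);

    printf("bit pattern         : 0x%08X\n", bits);
    printf("%%f (6 digits)       : %f\n", a);
    printf("%%.20f (more digits) : %.20f\n", a);
    return 0;
}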
Let's take this example a little further:
#include <stdio.h>

int main(void)
{
    float a, b;
    a = 0.056;
    b = 0.064;                    /* difference should be 0.008 */
    printf("a=%f, b=%f", a, b);
    if (b - a == 0.008)           /* this comparison fails */
        printf("\n%f - %f == %f subtraction is correct", b, a, b - a);
    else
        printf("\n%f - %f != %f Subtraction has round-off error\n", b, a, b - a);
    return 0;
}
Note that the else block gets executed here, while we would expect the if block to run. Here is the output:
a=0.056000, b=0.064000
0.064000 - 0.056000 != 0.008000 Subtraction has round-off error
Again the values are printed the way we expect (with no visible round-off error), even though the stored values do contain round-off errors; what is shown is a disguised value. My second question is:
2. Is there a way to show the actual value of the stored number rather than the disguised one that we entered?
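For illustration, here is a rough sketch of what I mean by "actual value": reprinting the same numbers with many more digits than %f's default of six. The choice of 20 digits below is arbitrary, and the trailing digits will depend on the platform's float/double formats:

#include <stdio.h>

int main(void)
{
    float a = 0.056;
    float b = 0.064;

    /* %f rounds to 6 decimal places; asking for more digits
       exposes the values that are actually stored. */
    printf("a     = %.20f\n", a);
    printf("b     = %.20f\n", b);
    printf("b - a = %.20f\n", b - a);
    printf("0.008 = %.20f\n", 0.008);  /* the double literal from the comparison */
    return 0;
}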
Note: The C code above was compiled in Visual Studio 2008, but the behaviour should be reproducible in any language.