Possible Duplicate:
Is JavaScript's Math broken?
Dealing with accuracy problems in floating-point numbers
Consider the code below and its output:
#include <iostream>
#include <iomanip>
#include <cmath>

int main()
{
    double xleft  = 0.0;
    double xright = 1.0;
    double dx     = 0.1;

    std::cout << std::setprecision(36) << "dx is " << dx << std::endl;

    // Number of grid points covering [xleft, xright] with spacing dx.
    int numgridpts = static_cast<int>(std::ceil((xright - xleft) / dx)) + 1;

    for (int i = 0; i < numgridpts; ++i)
    {
        std::cout << std::setprecision(36) << xleft + i * dx << std::endl;
    }
    return 0;
}
$ ./a.out
dx is 0.100000000000000005551115123125782702
0
0.100000000000000005551115123125782702
0.200000000000000011102230246251565404
0.300000000000000044408920985006261617
0.400000000000000022204460492503130808
0.5
0.600000000000000088817841970012523234
0.700000000000000066613381477509392425
0.800000000000000044408920985006261617
0.900000000000000022204460492503130808
1
My question is: when I print the numbers to a precision of 36 decimal digits, why are 0, 0.5, and 1.0 represented exactly, whereas the other numbers seem to have garbage digits tacked on at the end?
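To inspect the underlying binary values, a small separate check like the one below should help (it is not part of my program above; it relies on the standard C99 "%a" printf conversion, which prints a double exactly in hexadecimal floating point):

#include <cstdio>

int main()
{
    // "%a" prints the exact binary value of a double in hexadecimal
    // notation; a short mantissa means the value is exact in binary.
    double values[] = { 0.0, 0.1, 0.3, 0.5, 1.0 };
    for (int i = 0; i < 5; ++i)
        std::printf("%3.1f = %a\n", values[i], values[i]);
    return 0;
}

I would expect 0.5 to print as something short like 0x1p-1 (exact in binary), while 0.1 prints with a long mantissa such as 0x1.999999999999ap-4.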
Also, if I add the floating-point representations of 0.1 and 0.2 as shown in the output above, they don't seem to add up to the printed representation of 0.3; they differ in the garbage digits.
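A minimal way to demonstrate that mismatch directly (again, separate from the program above) would be:

#include <iostream>
#include <iomanip>

int main()
{
    double sum     = 0.1 + 0.2;  // rounded once more after the addition
    double literal = 0.3;        // rounded directly from the decimal literal
    std::cout << std::setprecision(36)
              << "0.1 + 0.2 = " << sum << '\n'
              << "0.3       = " << literal << '\n'
              << "equal?      " << std::boolalpha << (sum == literal)
              << std::endl;
    return 0;
}

I would expect this to print two slightly different values and "equal? false".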
I am using Ubuntu 10.10 (Linux) and the GCC compiler.