For example, I have code like this:
double a = 3;
double b = 1.1;
double c = a * b;
Then, single-stepping in the debugger, I can watch that the value of c is not exactly 3.3; it shows up as 3.3000000000000003.
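For reference, here is a minimal sketch that reproduces the same value without a debugger. I am assuming plain C here (the original snippet could equally be C++, Java, or C#), and the %.20f / %.17g format specifiers are just my choice of enough digits to expose the stored double values:

#include <stdio.h>

int main(void)
{
    double a = 3;
    double b = 1.1;
    double c = a * b;

    /* Print enough digits to show the values actually stored in the doubles */
    printf("b = %.20f\n", b);   /* typically prints 1.10000000000000008882 */
    printf("c = %.17g\n", c);   /* typically prints 3.3000000000000003 */
    return 0;
}

The same digits should appear in any environment that uses IEEE 754 double precision, so this does not seem specific to the debugger's watch window.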
Can anyone give me an explanation of why the value loses precision like this?