It is astounding to me that this happens, and I don't have an explanation for it. Here is a simple console application, as created by Visual Studio:
#include "stdafx.h"   // must be the first include when precompiled headers are enabled
#include "ConsoleApplication.h"
#include <stdlib.h>
#include <crtdbg.h>

int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
{
    double a;
    a = 3.4;
    return 0; // <======= Debug break point set here
}
At the breakpoint, the debugger shows the value of a as 3.3999999999999999 instead of 3.4. To most programmers this isn't an issue, but why can't the number 3.4 be stored exactly as a double? Thinking about its binary representation, it doesn't make sense to me.
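For what it's worth, the same value shows up outside the debugger as well. Here is a minimal standalone sketch (plain main, no TCHAR, assuming any standard C++ toolchain) that prints the stored value with 17 significant digits, which is enough to expose the rounding of a double:

    #include <cstdio>

    int main()
    {
        double a = 3.4;
        // A double carries at most ~17 significant decimal digits,
        // so %.17g reveals the nearest representable value.
        std::printf("%.17g\n", a);   // prints 3.3999999999999999
        return 0;
    }

So the 3.3999999999999999 is not a debugger display quirk; it is the value actually stored in a.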