Can anyone explain why this happens?
static void Main()
{
    const float xScaleStart = 0.5f;
    const float xScaleStop = 4.0f;
    const float xScaleInterval = 0.1f;
    const float xScaleAmplitude = xScaleStop - xScaleStart;

    // The same expression, once as a compile-time constant and once as a regular local:
    const float xScaleSizeC = xScaleAmplitude / xScaleInterval;
    float xScaleSize = xScaleAmplitude / xScaleInterval;

    Console.WriteLine(">const float {0}, (int){1}", xScaleSizeC, (int)xScaleSizeC);
    Console.WriteLine(">      float {0}, (int){1}", xScaleSize, (int)xScaleSize);
    Console.ReadLine();
}
Output:
>const float 35, (int)34
> float 35, (int)35
I know that 0.1 can't be represented exactly as a float (the nearest float is 0.100000001490116..., slightly more than 0.1), so the real quotient is just under 35. But why does the cast give 34 with 'const float' and 35 with plain 'float'? Is this a compiler bug?
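Here is a quick sketch of the rounding involved. Widening the operands to double is just my assumption about how the constant might be folded at compile time, not something I've verified:

double tenth = 0.1f;                      // 0.1f widened to double: 0.100000001490116...
double quotient = (4.0f - 0.5f) / tenth;  // just under 35
Console.WriteLine(quotient);              // prints 34.9999994784572...
Console.WriteLine((int)quotient);         // prints 34 (the cast truncates)
Console.WriteLine((float)quotient);       // prints 35 (rounds to the nearest float)

That would match the two results I'm seeing: truncating the intermediate value gives 34, while rounding it to a float first gives 35.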
For the record, the code compiles into:
private static void Main(string[] args)
{
    float xScaleSize = 35f;
    Console.WriteLine(">const float {0}, (int){1}", 35f, 34);
    Console.WriteLine(">      float {0}, (int){1}", xScaleSize, (int)xScaleSize);
    Console.ReadLine();
}