My algorithm calculates the machine epsilon for single-precision floating-point arithmetic. It should come out to roughly 1.1921e-007 (2^-23, the machine epsilon of IEEE 754 single precision). Here is the code:
static void Main(string[] args) {
    // start with some small magic number
    float a = 0.000000000000000013877787807814457f;
    for (; ; ) {
        // add the small a to 1
        float temp = 1f + a;
        // break once 1 + a really is greater than 1
        if (temp - 1f != 0f) break;
        // otherwise a is still too small -> double it
        a *= 2f;
        Console.Out.WriteLine("current increment: " + a);
    }
    Console.Out.WriteLine("Found epsilon: " + a);
    Console.ReadKey();
}
In debug mode, it gives the following reasonable output (abbreviated):
current increment: 2,775558E-17
current increment: 5,551115E-17
...
current increment: 2,980232E-08
current increment: 5,960464E-08
current increment: 1,192093E-07
Found epsilon: 1,192093E-07
However, when switching to release mode (no matter whether optimization is on or off!), the code gives the following result:
current increment: 2,775558E-17
current increment: 5,551115E-17
current increment: 1,110223E-16
current increment: 2,220446E-16
Found epsilon: 2,220446E-16
which corresponds to the epsilon for double precision. So I assume some optimization causes the computations to be carried out on double values. Of course the result is wrong in this case!
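For reference, here is a minimal sketch (my own check, not part of the original algorithm; the method name FindDoubleEpsilon is just mine) of the same loop written with double variables. It produces exactly the release-mode output above, which is why I suspect the intermediates are being widened:

static double FindDoubleEpsilon() {
    // same start value, but declared as double
    double a = 0.000000000000000013877787807814457;
    for (; ; ) {
        double temp = 1d + a;
        // with doubles, 1 + a first exceeds 1 at a = 2^-52
        if (temp - 1d != 0d) break;
        a *= 2d;
    }
    return a; // 2,220446E-16 - the value the x86 release build reports
}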
Also: this happens only when targeting x86 in a Release build in the project options. Again: optimization on/off does not matter. I am on 64-bit Windows 7, VS 2010 Ultimate, targeting .NET 4.0.
What might cause that behaviour? Some WOW64 issue? How can I get around it in a reliable way? How can I prevent the CLR from generating code that uses double-precision instead of single-precision calculations?
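If wider intermediate precision (e.g. the x87 registers) really is the cause, one workaround I can think of is forcing the narrowing with explicit casts. This is only a sketch based on my reading that ECMA-335 requires an explicit conversion to float to round the value back to single precision; I have not verified it against the x86 JIT:

float a = 0.000000000000000013877787807814457f;
for (; ; ) {
    // the explicit casts should force rounding back to 32-bit precision
    float temp = (float)(1f + a);
    if ((float)(temp - 1f) != 0f) break;
    a *= 2f;
}
Console.Out.WriteLine("Found epsilon: " + a);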
Note: switching to "Any CPU" or even "x64" as the platform target is not an option, even though the problem does not occur there. We have some native libraries that come in different versions for 32-bit and 64-bit, so the target must be specific.