
My algorithm calculates the machine epsilon for single-precision floating-point arithmetic. It is supposed to be somewhere around 1.1921e-007. Here is the code:

static void Main(string[] args) {
    // start with some small magic number
    float a = 0.000000000000000013877787807814457f;
    for (; ; ) {
        // add the small a to 1
        float temp = 1f + a;
        // break, if a + 1 really is > '1'
        if (temp - 1f != 0f) break;
        // otherwise a is too small -> increase it
        a *= 2f;
        Console.Out.WriteLine("current increment: " + a); 
    }
    Console.Out.WriteLine("Found epsilon: " + a); 
    Console.ReadKey(); 
}

In debug mode, it gives the following reasonable output (abbreviated):

current increment: 2,775558E-17
current increment: 5,551115E-17
...
current increment: 2,980232E-08
current increment: 5,960464E-08
current increment: 1,192093E-07
Found epsilon: 1,192093E-07

However, when switching to release mode (no matter whether optimization is on or off!), the code gives the following result:

current increment: 2,775558E-17
current increment: 5,551115E-17
current increment: 1,110223E-16
current increment: 2,220446E-16
Found epsilon: 2,220446E-16

which corresponds to the value for double precision. So I assume some optimization causes the computations to be done on double values. Of course the result is wrong in this case!

Also: this happens only if targeting x86 Release in the project options. Again: optimization on/off does not matter. I am on 64-bit Windows 7, VS 2010 Ultimate, targeting .NET 4.0.

What might cause this behaviour? Some WOW64 issue? How can I get around it in a reliable way? How can I prevent the CLR from generating code that uses double-precision instead of single-precision calculations?

Note: switching to "Any CPU" or even "x64" as the platform target is not an option - even though the problem does not occur there. We have some native libraries, in different versions for 32/64 bit, so the target must be specific.
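As a side note, the C# specification explicitly allows floating-point operations to be performed at higher precision than the operand type, and states that an explicit cast to `float` forces the value to be rounded to single precision. Whether a given x86 JIT honours that for this particular loop is an assumption, but a minimal sketch of the epsilon search with such casts would look like this (the `ComputeEpsilon` name is just for illustration):

```csharp
using System;

class EpsilonViaCast
{
    // Computes the float machine epsilon. The explicit (float) casts are the
    // C#-specified way to force intermediate results back to single
    // precision, even if the JIT kept them in a wider register.
    public static float ComputeEpsilon()
    {
        // start with some small magic number (2^-56)
        float a = 0.000000000000000013877787807814457f;
        // double a until (1 + a) is representably larger than 1
        while ((float)((float)(1f + a) - 1f) == 0f)
            a *= 2f;
        return a;
    }

    static void Main()
    {
        Console.WriteLine("Found epsilon: " + ComputeEpsilon());
    }
}
```

The `(float)` casts on the intermediates are the only change from the loop above.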

Marc Gravell
user492238
  • http://stackoverflow.com/questions/2342396/why-does-this-floating-point-calculation-give-different-results-on-different-mach/2343351#2343351 (which also links to http://stackoverflow.com/questions/2345534/c-xna-visual-studio-difference-between-release-and-debug-modes and http://stackoverflow.com/questions/2225503/clr-jit-optimizations-violates-causality) - basically, this is expected. – Marc Gravell Jun 08 '11 at 10:59
  • Also note that `float.Epsilon == 1,401298E-45` – H H Jun 08 '11 at 11:11
  • @henk-holterman This epsilon is actually a different one: the epsilon here is the difference between 1 and the next larger representable number - in the selected precision. – user492238 Jun 08 '11 at 11:31
  • @Marc Gravell thanks for the links. They led me to the corresponding C# specification paragraph. Unfortunately, this somehow disqualifies C# as a language for numerical computations - which is something, well, at least I did not expect. Thanks. – user492238 Jun 08 '11 at 11:34
  • @user492238 meh - I would argue that... when it comes to FP rounding, it is hard to say any is "correct", and you seem to be arguing that it is wrong in attempting to be *more precise*... Actually, as indicated in the posts - you can get around this by forcing the value down to a ***field*** - I will add an answer. – Marc Gravell Jun 08 '11 at 11:37
  • @user492 : I would say `float` is disqualified from serious numerical computations. It is a historical left-over and should only be considered when storing very large arrays. So: why are you not using `double`? – H H Jun 08 '11 at 11:41
  • @Henk Holterman 1. because people are using float, and not only in graphics. 2. C# does not promise to stay at double. It may use even larger precision if available - without notice. Since the user must handle FP arithmetic anyway (for serious problems), she should know what/when/how to pay attention. Not easy without knowing the target precision. – user492238 Jun 08 '11 at 11:45
  • Most FP code I write is quite agnostic of the available precision. I sometimes have a lower limit, but I have trouble placing your upper limit in some real scenario. Esp for (GDI) graphics. – H H Jun 08 '11 at 11:50
  • @Henk Holterman knowing eps and some other machine constants for the current platform is crucial for error estimation. – user492238 Jun 08 '11 at 11:58
  • I would guess that the problem may be even "worse". The FPU on the x86 platform uses 80-bit registers internally, and I would expect any calculations where intermediate results are kept in an FPU register to be performed using 80-bit precision. SSE optimizations may possibly affect the actual precision used, however. – SoftMemes Jun 08 '11 at 12:03
  • @user49, can't your problem-domain deliver an Epsilon? And worst-case, you can store it in a var, once. – H H Jun 08 '11 at 12:15
  • @Henk Holterman Hm, but that way we are only pushing the problem somewhere else. Where should the problem domain get it from? It is faced with the same problem. Knowing eps for a specific precision is easy, but it is important to assign a certain eps at runtime to a certain precision. And this of course only applies if the calculation is really done in that precision then. – user492238 Jun 08 '11 at 12:21

1 Answer


As discussed in the comments, this is expected. It can be side-stepped by removing the JIT's ability to keep the value in a register (which may be wider than the declared type) - by forcing it down to a field (which has a clearly-defined size):

class WorkingContext
{ 
    public float Value; // you'll excuse me a public field here, I trust
    public override string ToString()
    {
        return Value.ToString();
    }
}
static void Main()
{
    // start with some small magic number
    WorkingContext a = new WorkingContext(), temp = new WorkingContext();
    a.Value = 0.000000000000000013877787807814457f;
    for (; ; )
    {
        // add the small a to 1
        temp.Value = 1f + a.Value;
        // break, if a + 1 really is > '1'
        if (temp.Value - 1f != 0f) break;
        // otherwise a is too small -> increase it
        a.Value *= 2f;
        Console.Out.WriteLine("current increment: " + a);
    }
    Console.Out.WriteLine("Found epsilon: " + a);
    Console.ReadKey();
}

Interestingly, I tried this with a struct first, but the JIT was able to see past my cheating (presumably because it is all on the stack).
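As an alternative sketch that avoids the precision question entirely, the same constant can be derived from the raw IEEE-754 bit pattern of `1f`, since incrementing the bits yields the next representable float. No floating-point rounding is involved in finding that neighbour, so JIT register width cannot affect it (the `NextAfterOne` name is just for illustration):

```csharp
using System;

class NextAfterOne
{
    // The distance from 1.0f to the next representable float, obtained by
    // incrementing the raw IEEE-754 bit pattern rather than by repeated
    // floating-point addition.
    public static float Epsilon()
    {
        // reinterpret 1.0f as its 32-bit pattern (0x3F800000)
        int bits = BitConverter.ToInt32(BitConverter.GetBytes(1f), 0);
        // bits + 1 is the next float above 1.0f
        float next = BitConverter.ToSingle(BitConverter.GetBytes(bits + 1), 0);
        // this subtraction is exact in any precision >= single
        return next - 1f;
    }

    static void Main()
    {
        Console.WriteLine("Found epsilon: " + Epsilon());
    }
}
```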

Marc Gravell
  • Thanks! It works - for the current setup. Too bad, we cannot trust it for future situations though. I personally consider this part of the C# spec as a mistake. But for now, again, it works :) – user492238 Jun 08 '11 at 12:08
  • @user492238 it isn't an uncommon approach; you may find similar behaviour in other languages/platforms – Marc Gravell Jun 08 '11 at 15:15