8

We have some unit tests that check the solution of a linear system of equations, comparing floating point numbers with a delta.

While trying to adjust the delta, I noticed that the same numbers change slightly between Visual Studio's Run Test and Debug Test modes.

Why does this happen? When I debug a test, the `#if DEBUG` sections are disabled, so the executed code should be the same.
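
The assertions look roughly like this; the values and the delta below are placeholders, not our actual code:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SolverTests
{
    [TestMethod]
    public void Solution_MatchesReferenceValue()
    {
        // Placeholder for the real solver: solve 3x = 1 directly.
        double actual = 1.0 / 3.0;
        double expected = 0.3333333333333333;   // reference value from a previous run
        double delta = 1e-9;                    // the tolerance we are trying to tune

        // MSTest overload that passes if the values differ by at most delta.
        Assert.AreEqual(expected, actual, delta);
    }
}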

Thanks.

abenci
  • Can you show code with an example? – Christian Phillips Sep 05 '13 at 08:18
  • If `#if DEBUG` are disabled, doesn't that mean you're typically building a release build? If so, then yes, there are slight differences in handling of floating point values between optimized and unoptimized code. – Lasse V. Karlsen Sep 05 '13 at 08:18
  • We are printing the expected values from a release build using `Console.WriteLine()`, then comparing them with the 'Run Test' actual values, and they are different. How can we print the same values? – abenci Sep 05 '13 at 08:22
  • You will have to run the tests on a release build (something you should do anyway). – Lasse V. Karlsen Sep 05 '13 at 08:23
  • Do you mean that when we select 'Debug test', the assemblies in the `Bin\Debug` folder are used even if our current Visual Studio configuration is Release? – abenci Sep 05 '13 at 08:29

4 Answers

11

For a simple example of code that produces different results between a typical DEBUG and RELEASE build (unoptimized vs. optimized), try this in LINQPad:

void Main()
{
    float a = 10.0f / 3;   // folded to a single float constant by the C# compiler
    float b = 10;
    b /= 3;                // computed at run time; an optimized JIT may keep the
                           // intermediate at wider register precision

    (a == b).Dump();       // can print False if b is never rounded back down to 32 bits
    (a - b).Dump();        // the size of the resulting difference
}

If you execute this with optimizations on (make sure the little button at the bottom right of the LINQPad window is set to "/o+"), you'll get this result:

False
-7,947286E-08

If you turn optimizations off, you get this:

True
0

Note that the produced IL code is the same:

[Screenshot: side-by-side comparison of the IL emitted for the unoptimized and optimized builds]

Note that the addresses differ; this might indicate that there is more at play here than just the pure IL, though I have no idea what that might be.

Lasse V. Karlsen
  • This is very interesting. What optimization causes this to happen? Are they just saving bits in a debug environment and not carrying the extra precision? – NWard Sep 05 '13 at 08:26
  • @NWard in debug, it needs to write the values down to the locals in order for them to be available for you to show in the debugger. This means that they can't be kept purely in registers, so the JIT needs to use a different approach. The registers are wider than the managed primitives, so it is writing the value to the locals that loses precision / accuracy. – Marc Gravell Sep 05 '13 at 08:32
  • We are always working in the `Release` Visual Studio configuration. Why do the floating point results change between `Run test` and `Debug test`? – abenci Sep 05 '13 at 08:50
6

There are all sorts of things that can impact floating point computation, the most significant of which is whether the value is actually written to a local/field or not. It is possible that for the optimized build, the JIT is able to keep the value in a register - the FPU registers are 80 bits wide, which minimizes cumulative errors. If it needs to actually write the value down to a 32-bit (`float`) or 64-bit (`double`) local or field, it will by necessity lose some of that precision. So yes, if it can do all the work in registers, it can give a different (usually more "correct") result than if it writes the intermediate values to locals etc.

There are other available registers too, but I doubt these are in use here: XMM/SSE registers are 128 bit; SIMD can (depending on the machine) be up to 512 bit.
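
To make the narrowing effect concrete, here is a minimal sketch (not from the original answer) that performs the rounding explicitly instead of leaving it to the JIT, so it behaves the same in Debug and Release:

using System;

class NarrowingDemo
{
    static void Main()
    {
        double wide = 10.0 / 3.0;       // kept at (at least) 64-bit precision
        float narrowed = (float)wide;   // explicitly rounded down to 32 bits, like
                                        // spilling an intermediate to a float local

        // The float is promoted back to double for the comparison, so the
        // rounding error shows up as a small non-zero difference.
        Console.WriteLine(wide == narrowed);   // False
        Console.WriteLine(wide - narrowed);    // roughly 7.95E-08
    }
}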

Marc Gravell
  • @Alberto define "correct" and "expected" here. What values do you want to see? – Marc Gravell Sep 05 '13 at 08:34
  • Is there any way to get the correct expected values from Visual Studio Release builds? I mean something I can compare against in unit tests. Here, the values from the Visual Studio output, Run test and Debug test are all different... – abenci Sep 05 '13 at 08:36
  • Whenever you say "correct" and "expected" you must define what is correct and declare your expectations. The two models are slightly different; there is nothing that indicates that one is wrong and the other is right. If the values are to be stored, you will lose that precision. The correct way here is to run your tests against the optimized code, something you should do anyway. Why test code you're not going to ship to your customer? It's the code you want to ship that you should be testing. – Lasse V. Karlsen Sep 05 '13 at 08:40
  • @Alberto comparing floating point with equality is pretty much the definition of "doing it wrong". The only reliable/recommended way to compare floating points is to subtract them and check that the absolute value (i.e. non-negative) of the difference is less than some arbitrarily small number. Basically: "close enough" – Marc Gravell Sep 05 '13 at 08:40
  • @MarcGravell I understood from his question about using a delta that they're already doing that, but of course if the delta is too small, the two versions of the code will again produce different (enough) results. – Lasse V. Karlsen Sep 05 '13 at 08:41
  • @LasseV.Karlsen fair enough, I missed the "with a delta". But I would say in this case, then, "needs a bigger delta" - the delta needs to accommodate exactly what this is: round-off. – Marc Gravell Sep 05 '13 at 08:43
  • The results are too different; I cannot use a delta of 0.01 when comparing 0.02 and 0.03. For us, 0.02 is not equal to 0.03. – abenci Sep 05 '13 at 08:49
  • Is there any way to `Console.WriteLine()` something from Visual Studio closer to what MSTest computes? – abenci Sep 05 '13 at 08:52
  • @Alberto 0.01 vs 0.02 sounds like something way upstream is **very** sensitive - to the point where it is brittle and unsafe. This is nothing to do with "is the test rig right". It usually means, for example, that the order of operations is incorrect. You might want to consider whether re-ordering the operations to reduce the impact of round-off is possible. – Marc Gravell Sep 05 '13 at 11:11
2

If you run a build, it will be executed with full JIT optimisation, i.e. at runtime the JIT compiler will do clever things.

If you debug the same build, JIT optimisations will be turned off, so different machine code instructions will be generated by the JIT compiler.
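
One way to see which situation you are in is to check whether the JIT optimizer was suppressed for the running assembly and whether a debugger is attached. This is only a sketch: the `DebuggableAttribute` reflects what the compiler emitted, and the debugger can additionally force optimisations off when the code is jitted.

using System;
using System.Diagnostics;
using System.Reflection;

class JitOptimisationInfo
{
    static void Main()
    {
        var attr = (DebuggableAttribute)Attribute.GetCustomAttribute(
            Assembly.GetExecutingAssembly(), typeof(DebuggableAttribute));

        // No attribute, or optimizer not disabled, usually indicates a Release build.
        bool optimiserEnabled = attr == null || !attr.IsJITOptimizerDisabled;

        Console.WriteLine("JIT optimizer enabled: " + optimiserEnabled);
        Console.WriteLine("Debugger attached:     " + Debugger.IsAttached);
    }
}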

Optimisations vary. One example is the storing of variables: variables get stored in registers, and not all registers are the same size. If code is optimised, some steps may be removed or reordered, so the choice of register for a given operation may change, and with it the accuracy of the stored value.

This leads to different outputs for floating point calculations.

Compilers often guarantee a minimum accuracy but rarely a maximum accuracy for intermediate steps.
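
In practice this means the test's tolerance has to absorb that round-off. A minimal sketch of a tolerance-based comparison (the helper name and tolerance values are illustrative only):

using System;

static class FloatingPointCompare
{
    // Treats two values as equal if they are within an absolute tolerance
    // (useful near zero) or a relative tolerance (scaled by magnitude).
    public static bool NearlyEqual(double expected, double actual,
                                   double relTol = 1e-6, double absTol = 1e-9)
    {
        double diff = Math.Abs(expected - actual);
        double scale = Math.Max(Math.Abs(expected), Math.Abs(actual));
        return diff <= absTol || diff <= relTol * scale;
    }
}

A unit test would then assert, for example, `Assert.IsTrue(FloatingPointCompare.NearlyEqual(expected[i], actual[i]))` instead of comparing for exact equality.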

See also CLR JIT optimizations violates causality?

morechilli
  • Do you know a way to print results to the console window with full JIT optimization? Maybe I should create a text file so Visual Studio is not involved anymore? – abenci Sep 05 '13 at 09:00
  • I think you may be asking for the impossible. I think you are asking for the same floating point results with or without logging? Adding logging of any sort changes the code and so has the potential to change the floating point output. You most likely need to look at tolerances for how you assess the logging output. – morechilli Sep 05 '13 at 13:26
0

Even Visual Studio's Ctrl+F5 and F5 yield different floating point values. The only option for printing reference values is to write a text file from your code while running a Release build without the debugger attached (Ctrl+F5). Different machines will produce different floating point values, so it's up to you where to generate it.

This way, all your floating point numbers will match exactly!
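
A minimal sketch of this approach, assuming the values come from a Release run started with Ctrl+F5; the file name, number format and placeholder values are illustrative only:

using System.Globalization;
using System.IO;
using System.Linq;

class ReferenceDump
{
    static void Main()
    {
        // Placeholder for the real solver output.
        double[] solution = { 1.0 / 3.0, 2.0 / 7.0 };

        // "R" round-trips the exact binary value; InvariantCulture avoids
        // locale-dependent decimal separators in the file.
        File.WriteAllLines("expected.txt",
            solution.Select(x => x.ToString("R", CultureInfo.InvariantCulture)));
    }
}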

abenci