
Given the following test:

[Fact]
public void FactMethodName()
{
    var d = 6.4133;
    var actual = d.ToString("R");
    Assert.Equal("6.4133", actual);
}

It passes on x86 but fails on Any CPU and x64:

Assert.Equal() Failure
Position: First difference is at position 5
Expected: 6.4133
Actual:   6.4132999999999996

The question is: why does that happen? Note that not all double values behave this way.

I understand the issues with floating point. No need to point me to Wikipedia, and no need to point out that the test is incorrect -- it just illustrates the problem; change it to Console.WriteLine(...) if you will.
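
Taking up that suggestion, here is a minimal console repro (a sketch; compile it once as x86 and once as x64 and compare the output):

using System;

class Program
{
    static void Main()
    {
        var d = 6.4133;
        // "R" asks for a string that round-trips the value; per the failure
        // above, this prints "6.4133" on x86 but "6.4132999999999996" on
        // x64/Any CPU (on a 64-bit machine).
        Console.WriteLine(d.ToString("R"));
    }
}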

UPDATE: I removed mentions of test runners because those details turned out to be irrelevant.

the_joric
  • Could it be the implementation of Assert.Equals? – Nick Sep 17 '12 at 13:42
  • The floating point representation of 6.4133 is 6.4132999999999996. The expected value is not correct, as there is no way to represent 6.4133 **exactly** with IEEE floating point math. – John Alexiou Sep 17 '12 at 13:50
  • Relevant reading here, http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html – John Alexiou Sep 17 '12 at 13:53
  • On x86, 80bit floats are used for intermediate results, vs 64bit on x64. Could have something to do with that. – harold Sep 17 '12 at 15:30
  • @harold - This has nothing to do with the differences between x86 and x64; the author's unit test is simply wrong. The expected value is simply wrong. A `double` variable will have the same value on x86 or x64 in a case like this. The author should limit the output of `ToString()` to 4 decimal places. This would correct the error in their unit test. – Security Hound Sep 17 '12 at 16:36
  • @Ramhound well that's disappointing. Suddenly this isn't interesting anymore. – harold Sep 17 '12 at 16:38
  • @harold - I did make one mistake in a statement. Looking at what the author said, it appears `var` would be a `float` not `double` in the end 32-bit and 64-bit floating percision would act the same on either platform ( at least in a case like this ). This is simply a case of rounding of a `float` and `double` the underline behavior is the same. – Security Hound Sep 17 '12 at 16:44
  • @Ramhound how is it a float? It doesn't have an "f" suffix. And anyway the author asserts that there *is* a difference between x86 and x64, so we cannot conclude that there isn't. – harold Sep 17 '12 at 17:09
  • @Ramhound seems like you are wrong. My unit test is just showing the problem. The question is why `double` is formatted differently on x64 and x86. And yes, it is always a `double` -- either on x86 or on x64. – the_joric Sep 18 '12 at 13:25
  • Did you ever figure out the reason for this behavior? – Evk Mar 12 '17 at 14:45
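
To see John Alexiou's point about representation directly, one can print the stored value at full precision along with its raw bits (a sketch; the G17 output is the same value that appears in the failure message above):

using System;

class Program
{
    static void Main()
    {
        var d = 6.4133;
        // 17 significant digits always round-trip a double, so this
        // shows the stored value in full: 6.4132999999999996
        Console.WriteLine(d.ToString("G17"));
        // The underlying 64-bit pattern is the same on x86 and x64
        // (as the comments above note), so any difference must arise
        // during formatting, not storage.
        Console.WriteLine(BitConverter.DoubleToInt64Bits(d).ToString("X16"));
    }
}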

2 Answers


I think the secret is in the use of the "R" (round-trip) format string (see the documentation for more details):

"When a Single or Double value is formatted using this specifier, it is first tested using the general format, with 15 digits of precision for a Double and 7 digits of precision for a Single. If the value is successfully parsed back to the same numeric value, it is formatted using the general format specifier. If the value is not successfully parsed back to the same numeric value, it is formatted using 17 digits of precision for a Double and 9 digits of precision for a Single."

Max
  • OK, but how is this related to x64 vs x86? – the_joric Sep 18 '12 at 18:55
  • Obviously the statement "If the value is not successfully parsed back to the same numeric value, it is formatted using 17 digits of precision for a Double" is true only on the x64 platform, for some reason. You can notice that d.ToString("G17") is exactly 6.4132999999999996 on both platforms. Unfortunately the source code for Number.cs (the class that formats numbers to strings) is not accessible, so I can't say which operations exactly lead to this behaviour. – Max Sep 19 '12 at 07:41

As Raj and ja72 point out, the issue is to do with numeric rounding, and I realise your test is just an illustration of the problem, but in a real-world test you should avoid these logic errors. In particular, avoid converting to string or calling any other method that may have side effects that can taint your test's success.

Unfortunately this is commonly referred to as a fragile test. It works on some machines, some of the time. If you are working in a development team (particularly one with a build server, or an offshore or near-shore team), then tests like this can be worthy of the "Works on my machine" award.
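
For instance, a less fragile version of the original test would compare numbers rather than strings (a sketch using xUnit's Assert.Equal overload for doubles, which rounds both sides to the given number of decimal places before comparing):

[Fact]
public void FactMethodName()
{
    var d = 6.4133;
    // Compare as doubles with an explicit precision instead of
    // round-tripping through a platform-sensitive string format.
    Assert.Equal(6.4133, d, 4);
}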

AlSki
  • It is not a fragile test. The behavior is the same on ALL machines with 64-bit Windows. The difference is between x86 and x64, and I am wondering what it is caused by. – the_joric Sep 17 '12 at 15:27
  • I do understand your problem with why the difference is occurring, and I assume it has to do with the greater accuracy of x64 over x86. In my current team we have devs spread over XP and Win7, x86 and x64, oh, and a Win2K3 TeamCity build server on x64. Some of us use nCrunch, others ReSharper. For my team that test would meet our definition of "fragile", since it would break for some devs and not others. – AlSki Sep 17 '12 at 15:51