static void Main(string[] args)
{
    double a = 222.65;
    double b = 0.056124761669643426;
    double c = (double)((decimal)a * (decimal)b);
}

Why do these calculations give different results on different operating systems? This part always gives the same result:

((decimal)a * (decimal)b)

After casting to double I get either:

12.496178185746102

or

12.496178185746105

My problem is that this minor difference has a big impact on the final result and the tests fail.
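The two values differ only in the last bits of the double. A small diagnostic sketch I can run on both machines (separate from the failing project) prints the round-trip representation and the raw bit pattern, which makes it easy to see exactly which double each machine produces; ToString("R") and BitConverter.DoubleToInt64Bits are standard .NET APIs:

static void Main(string[] args)
{
    double a = 222.65;
    double b = 0.056124761669643426;
    double c = (double)((decimal)a * (decimal)b);

    // "R" prints enough digits to round-trip the exact double value
    Console.WriteLine(c.ToString("R"));

    // The raw 64-bit pattern shows precisely where the two results diverge
    Console.WriteLine(BitConverter.DoubleToInt64Bits(c).ToString("X16"));
}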

Now the important information:

  1. The project is built with .NET 4.0 on the same machine.
  2. Both machines have .NET 4.0 and .NET 4.5.2 installed.
  3. The projects are run as x86 applications.
  4. I get the first result on machines with Windows 7, Windows Server 2003 or Windows Server 2008 installed.
  5. I get the second result on machines with Windows Server 2012 or Windows 10 installed.
  6. I am not sure about the CLR version, but I suppose it comes with .NET, so it should be the same (a quick check is sketched below).

It seems that something changed with Windows 8 / Windows Server 2012 (both were released together). I've always thought that results could be affected only by the .NET version. Any ideas?
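To pin down point 6, here is a quick sketch that prints the runtime and process details on each machine (Environment.Version, Environment.OSVersion and Environment.Is64BitProcess are all available in .NET 4.0):

Console.WriteLine(Environment.Version);                 // exact CLR build number
Console.WriteLine(Environment.OSVersion);               // Windows version
Console.WriteLine(Environment.Is64BitProcess);          // should be False for an x86 build
Console.WriteLine(Environment.Is64BitOperatingSystem);  // 32-bit vs 64-bit OS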

Edit: Since my description may have been misleading, here is an example:

 double a = 222.65;
 double b = 0.056124761669643426;
 double c = (double)((decimal)a * (decimal)b);

 double result = Process(c);  // <- doing something very complicated

 Assert.That(result, Is.EqualTo(expectedResult).Within(1E-8));  // <- here is the impact
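If it turns out that Process amplifies the tiny input difference beyond 1E-8 on some machines, one option is to compare with a relative rather than a purely absolute tolerance. A rough sketch (the helper name and the tolerance values below are only illustrative, not from the real tests):

static bool ApproximatelyEqual(double actual, double expected,
                               double relTol = 1e-9, double absTol = 1e-12)
{
    double diff = Math.Abs(actual - expected);

    // Accept values that are close either in absolute terms (near zero)
    // or relative to the magnitude of the numbers being compared
    return diff <= absTol
        || diff <= relTol * Math.Max(Math.Abs(actual), Math.Abs(expected));
}

// In the test:
// Assert.That(ApproximatelyEqual(result, expectedResult), Is.True);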
  • http://stackoverflow.com/questions/803225/when-should-i-use-double-instead-of-decimal – Steve Sep 08 '15 at 17:16
  • Have you checked if the CPUs are identical? (I assume you know that "equals" does not really work for floating point arithmetic - covered in many questions, e.g. http://stackoverflow.com/questions/17404513/floating-point-equality-and-tolerances) – Alexei Levenkov Sep 08 '15 at 17:17
  • Weird, even the regular Windows calculator cannot get to more than 13 decimals, and your numbers have 15 decimals... – Antoine Pelletier Sep 08 '15 at 17:18
  • @AntoinePelletier Maybe calc trims off the last few digits as it's probably not very important and hurts comparisons? – Thraka Sep 08 '15 at 17:19
  • You might try using [Convert.ToDouble](https://msdn.microsoft.com/en-us/library/w2zyd0fa%28v=vs.110%29.aspx) instead of casting explicitly to double. It's possible this will give different results. – Dan Bryant Sep 08 '15 at 17:19
  • @DanBryant what is the difference behind the scenes? – MistyK Sep 08 '15 at 17:20
  • @Zbigniew, This is just a hunch, but the MSDN article on [explicit conversion](https://msdn.microsoft.com/en-us/library/44xkhh41%28v=vs.110%29.aspx) says that different .NET languages are allowed to implement the conversion differently and hence yield different results. That shouldn't impact the result of using the same language if two CLR versions are identical, but it's possible there are slight changes due to service packs, perhaps modifying the jitter, perhaps enabling CPU instructions that were previously not used? – Dan Bryant Sep 08 '15 at 17:23
  • @AlexeiLevenkov What can cause different results when it comes to CPUs? Yes, equality is a known problem, but here I don't compare anything. – MistyK Sep 08 '15 at 17:23
  • @DanBryant I'll try to use it and see if it helps – MistyK Sep 08 '15 at 17:25
  • @Zbigniew - if everything else is the same, the CPU should be causing the difference... On comparison - how do your "tests fail" then? Showing the code that is actually the problem can help someone come up with an exact answer. – Alexei Levenkov Sep 08 '15 at 17:28
  • @AlexeiLevenkov The code is restricted and the process is not simple to show. By the way, it's not important in this case. I just wonder what the impact of the CPU is. Does it mean that any two CPUs may produce different results? I think it is more related to the operating system than to the CPU – MistyK Sep 08 '15 at 17:31
  • Can you add the disassembly from both machines? – Timur Mannapov Sep 08 '15 at 17:52
  • @Zbigniew You mention this changed in Windows Server 2012. The main thing I can see is a switch to 64-bit processors vs 32-bit for Windows Server 2003. That is most likely where the difference is coming from, as the processor architecture is different. – RubberChickenLeader Sep 08 '15 at 17:55
  • Same problem [as this one](http://stackoverflow.com/a/27636509/17034). When you get 102 as the last three digits then the FPU is operating in RoundingMode.Down instead of Near. Finding the evil code that does that is something we can't help you with. – Hans Passant Sep 08 '15 at 18:02
  • @Zbigniew not sure how SO can help you then. Floating point calculations are not exact and can vary between environments within an error margin. Since your tests rely on some proprietary code to validate conditions, you have to figure out yourself why that code does not properly take error margins into account. – Alexei Levenkov Sep 08 '15 at 18:02
  • Check Eric Lippert's answer in http://stackoverflow.com/questions/8795550/casting-a-result-to-float-in-method-returning-float-changes-result/8795656#8795656 . Seems like a very similar case. – Timur Mannapov Sep 08 '15 at 18:04
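Following up on Hans Passant's comment about the rounding mode: a quick way to check which rounding mode the FPU control word is actually in on each machine is to P/Invoke _controlfp from the C runtime. This is only a sketch; the constants are the standard _MCW_RC / _RC_* values from float.h:

using System;
using System.Runtime.InteropServices;

static class FpuCheck
{
    [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern uint _controlfp(uint newControl, uint mask);

    const uint _MCW_RC  = 0x0300; // rounding control mask
    const uint _RC_NEAR = 0x0000; // round to nearest (the default)
    const uint _RC_DOWN = 0x0100; // round toward minus infinity

    public static void Print()
    {
        // Passing mask = 0 only reads the control word, it does not change it
        uint rc = _controlfp(0, 0) & _MCW_RC;

        Console.WriteLine(rc == _RC_NEAR ? "Rounding mode: near"
                        : rc == _RC_DOWN ? "Rounding mode: down"
                        : "Rounding mode: 0x" + rc.ToString("X"));
    }
}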

0 Answers