Sorry if this is a repeat, but this is killing me and most of the .NET 4.0 vs 4.5 articles I find degenerate into screaming contests so I figured I would ask here.
Basically: did anything change in floating-point arithmetic between .NET 4.0 and .NET 4.5?
To expound: we have a (mostly) C# application that stores coordinates in 3 dimensions as floats and performs some sine and cosine operations on them. I can't be too specific for business reasons and because I lack access to some of the source code.

We compiled the application in Visual Studio 2010 SP1 against .NET 4.0, developed automated unit tests for that build, and they all passed. After installing Visual Studio 2012, some of the unit tests started failing on the precision of floating-point results — we are losing something like 6 digits off our expected values, according to our logs.

Our current hypothesis is that a change in intermediate precision (e.g. 80-bit x87 registers vs. 64-bit double arithmetic) has something to do with it. But since .NET 4.5 is an in-place replacement for 4.0, and the tests now fail whether we build in 2010 or 2012, we suspect one of the undocumented bug fixes in .NET 4.5 instead. I have been hitting my head against this problem for at least 3 days, and two of my coworkers have been working on it for longer.
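For context, here is a minimal sketch of the pattern our failing tests follow. The names and values are invented for illustration (our real code is more involved); the point is that coordinates are floats, the trig runs through `Math.Sin`/`Math.Cos` in double, and the result gets truncated back to float, so any change in intermediate precision shows up in an exact comparison:

```csharp
using System;

class CoordinateCheck
{
    static void Main()
    {
        // Coordinates stored as float; Math.Sin/Math.Cos compute in double,
        // then we truncate back to float, so intermediate precision matters.
        float x = 1.2345678f, y = 2.3456789f;
        float rotatedX = (float)(x * Math.Cos(0.5) - y * Math.Sin(0.5));

        // Our unit tests effectively do an exact comparison against values
        // logged under .NET 4.0, which is what started failing:
        //   Assert.AreEqual(expectedX, rotatedX);
        // A tolerance-based check would tolerate the runtime change instead:
        float expectedX = rotatedX; // placeholder; the real tests use logged values
        bool withinTolerance = Math.Abs(rotatedX - expectedX) < 1e-5f;
        Console.WriteLine(withinTolerance);
    }
}
```

We know exact float equality is fragile in general; the question is why results that were stable for us under 4.0 shifted after the 4.5 install.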
Thanks guys. Hopefully I have given enough information.