So I have a bit of a WTF on my hands: Double precision math is returning different results based on which thread it runs on.
Code:
double d = 312554083.518955;
Console.WriteLine(d);
d += 0.1d;
Console.WriteLine(d);
d = 2554083.518955;
Console.WriteLine(d);
d += 0.1d;
Console.WriteLine(d);
This prints:
312554083,518955
312554080
2554083,518955
2554083,5
but if I execute the same code on a brand-new thread it prints:
312554083,518955
312554083,618955
2554083,518955
2554083,618955
(Which, you know, are the correct results.)
As you can see, something is truncating the values to about eight significant digits, regardless of where the decimal point falls. I am running a fair amount of native code on the thread that returns the incorrect results (DirectX via SlimDX, FreeType2, FMOD); maybe one of those libraries is reconfiguring the FPU. This code, however, is pure C#, and the MSIL it compiles to is identical regardless of which thread it runs on.
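For what it's worth, the broken outputs match exactly what you would get by rounding the doubles to a 24-bit mantissa, which is what the x87 precision-control field produces when it is set to single precision. Here is a quick sanity check (in Python rather than C#, using a float32 round-trip as a stand-in for the 24-bit mode):

```python
import struct

def round_to_single(x):
    # Round-trip a double through IEEE 754 single precision,
    # i.e. round its mantissa to 24 bits.
    return struct.unpack('f', struct.pack('f', x))[0]

print(round_to_single(312554083.518955))  # 312554080.0
print(round_to_single(2554083.518955))    # 2554083.5
```

Both values come out exactly as the "wrong" thread prints them, which is why I suspect the FPU precision mode rather than anything in the C# code itself.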
Has anyone seen something like this before? What could be causing it?