10

I am using .NET 2.0 with PlatformTarget x64 and x86. I am giving Math.Exp the same input number, and it returns different results on each platform.

MSDN says you can't rely on a literal/parsed Double to represent the same number between platforms, but I think my use of Int64BitsToDouble below avoids this problem and guarantees the same input to Math.Exp on both platforms.

My question is why are the results different? I would have thought that:

  • the input is stored in the same way (double/64-bit precision)
  • the FPU would do the same calculations regardless of the processor's bitness
  • the output is stored in the same way

I know that in general I should not compare floating-point numbers beyond the 15th to 17th significant digit, but I am confused by the inconsistency here, given what looks like the same operation on the same hardware.

Anyone know what's going on under the hood?

double d = BitConverter.Int64BitsToDouble(-4648784593573222648L); // same as Double.Parse("-0.0068846153846153849") but with no concern about losing digits in conversion
Debug.Assert(d.ToString("G17") == "-0.0068846153846153849"
    && BitConverter.DoubleToInt64Bits(d) == -4648784593573222648L); // true on both 32 & 64 bit

double exp = Math.Exp(d);

Console.WriteLine("{0:G17} = {1}", exp, BitConverter.DoubleToInt64Bits(exp));
// 64-bit: 0.99313902928727449 = 4607120620669726947
// 32-bit: 0.9931390292872746  = 4607120620669726948
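
As an aside, the two bit patterns above differ by exactly one unit in the last place (ULP). A quick check of that, reusing the values from the output (my own sketch, not part of the original snippet; for two positive doubles, the difference of their bit patterns is their ULP distance):

long bits64 = 4607120620669726947L; // 64-bit result of Math.Exp(d)
long bits32 = 4607120620669726948L; // 32-bit result of Math.Exp(d)
Console.WriteLine(Math.Abs(bits32 - bits64)); // 1: the results are a single ULP apart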

The results are consistent on both platforms with JIT turned on or off.

[Edit]

I'm not completely satisfied with the answers below, so here are some more details from my searching.

http://www.manicai.net/comp/debugging/fpudiff/ says that:

So 32-bit is using the 80-bit FPU registers, 64-bit is using the 128-bit SSE registers.

And the CLI Standard says that doubles can be represented with higher precision if the hardware supports it:

[Rationale: This design allows the CLI to choose a platform-specific high-performance representation for floating-point numbers until they are placed in storage locations. For example, it might be able to leave floating-point variables in hardware registers that provide more precision than a user has requested. At the same time, CIL generators can force operations to respect language-specific rules for representations through the use of conversion instructions. end rationale]

http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-335.pdf (12.1.3 Handling of floating-point data types)

I think this is what is happening here, because the results differ after Double's standard 15 digits of precision. The 64-bit Math.Exp result is more precise (it has an extra digit) because internally 64-bit .NET is using an FPU register with more precision than the FPU register used by 32-bit .NET.
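
To illustrate the mechanism the rationale describes (a minimal sketch of my own, not from the spec): in C#, an explicit cast to double compiles to a conv.r8 instruction, which narrows any extended-precision intermediate back to a true 64-bit value at that point. This cannot change what happens inside Math.Exp itself, but it shows how CIL generators can "force operations to respect language-specific rules":

// Sketch: a (double) cast emits conv.r8, rounding any higher-precision
// intermediate down to 64 bits before the value is used further.
double a = 0.1, b = 0.2;
double sum = (double)(a + b);      // forced to 64-bit precision here
Console.WriteLine("{0:G17}", sum); // prints the stored 64-bit value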

Yoshi
  • +1 Interesting. I see the exact same symptoms on my machine, and switching between x86/anycpu changes the output. – sisve Oct 25 '10 at 21:23
  • Your final paragraph is incorrect. The 32-bit version will be **more correct** because it uses the 80-bit extended-precision x87 FPU, whereas the 64-bit version will use the faster and more consistent SSE2. – phuclv Jan 01 '17 at 09:56
  • Possible duplicate of [Difference in floating point arithmetics between x86 and x64](http://stackoverflow.com/questions/22710272/difference-in-floating-point-arithmetics-between-x86-and-x64) – phuclv Jan 01 '17 at 10:38
  • so many duplicates: [C# - Inconsistent math operation result on 32-bit and 64-bit](http://stackoverflow.com/q/2461319/995714), [Why would the same code yield different numeric results on 32 vs 64-bit machines?](http://stackoverflow.com/q/7847274/995714), [Floating point calculation change depending on the compiler](http://stackoverflow.com/q/2376247/995714), [Why does this floating-point calculation give different results on different machines?](http://stackoverflow.com/q/2342396/995714), [Floating point mismatch between compilers](http://stackoverflow.com/q/18494237/995714) – phuclv Jan 01 '17 at 10:44

2 Answers

4

Yes, rounding errors, and it is effectively NOT the same hardware. The 32-bit version is targeting a different instruction set and different register sizes.
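
A quick way to confirm which code path your process is actually on (an illustration of my own, assuming .NET 2.0, where Environment.Is64BitProcess does not yet exist):

// Sketch: IntPtr.Size is 8 in a 64-bit process and 4 in a 32-bit process,
// which tells you whether the JIT is emitting the SSE2-based 64-bit code
// or the x87-based 32-bit code.
bool is64Bit = IntPtr.Size == 8;
Console.WriteLine(is64Bit ? "64-bit (SSE2 code path)" : "32-bit (x87 code path)");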

Gabe
winwaed
  • That's interesting - are you saying that there is a different set of FPU instructions? Admittedly I don't know how Math.Exp is implemented, whether it's one FPU instruction or many. And I would have thought the FPU registers are the same on both platforms because I'm using the 'double' type. – Yoshi Oct 25 '10 at 21:34
  • I don't know the minutae of the .NET implementation or the x64 fpu, but I would not have expected them to have been identical. You are also converting from int to double which is introducing an error. – winwaed Oct 25 '10 at 22:09
  • I'm going to mark this as the answer because I think it provides the most detail. I found more information at this URL, which explains that 32-bit .NET is using 80-bit FPU registers, and 64-bit .NET is using 128-bit SSE registers: http://www.manicai.net/comp/debugging/fpudiff/ – Yoshi Oct 26 '10 at 00:37
  • @Yoshi SSE2 registers are SIMD registers, i.e. they hold multiple data elements in those 128-bit registers. That doesn't mean a single 128-bit value; it's two 64-bit values (with the upper one left unused if the code is not vectorized), so the precision will be lower than with the 80-bit x87 – phuclv Jan 01 '17 at 09:56
2

With the Double type you will get rounding errors, as the binary expansions of decimal fractions get very long very quickly. It might help to use the Decimal type instead.
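
To illustrate that point (a small sketch of my own; note that Math.Exp only accepts a double, so Decimal applies to arithmetic you control rather than to the Exp call itself):

// Sketch: 0.1 has no finite binary expansion, so double arithmetic picks
// up rounding error; decimal stores base-10 digits exactly.
Console.WriteLine(0.1 + 0.2 == 0.3);    // False: double rounds in binary
Console.WriteLine(0.1m + 0.2m == 0.3m); // True: decimal is exact here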

Steve Ellinger
  • I (think) I understand that, but any rounding errors that occur on the same calculation on the same input on the same hardware should at least be consistent, right? Or is there no guarantee of that due to some other factors? – Yoshi Oct 25 '10 at 21:16