I am part of a team developing scientific software, so reproducibility of our results is our highest priority. We noticed that the software produces slightly different results depending on which OS it runs on. I tracked it down to the fact that, on rare occasions, the exp and log functions return results that differ in the last bit of the double.
Here is an example in C#, but the same behaviour can be reproduced in at least C++ and Python. All tests were run on the same machine.
using System;

namespace Test {
    public class Program {
        public static void Main() {
            // Reinterpret a fixed bit pattern as a double and feed it to Math.Exp
            byte[] input = { 14, 243, 143, 0, 124, 41, 85, 64 };
            double inputD = BitConverter.ToDouble(input, 0);
            double outputD = Math.Exp(inputD);
            byte[] output = BitConverter.GetBytes(outputD);
            Console.WriteLine("Math.Exp(" + inputD + "\thex: " + BitConverter.ToString(input).Replace("-", " ") + ")\t=\t" +
                outputD + "\thex: " + BitConverter.ToString(output).Replace("-", " "));

            // Same check for Math.Log with another fixed bit pattern
            input = new byte[] { 198, 77, 75, 30, 56, 151, 18, 65 };
            inputD = BitConverter.ToDouble(input, 0);
            outputD = Math.Log(inputD);
            output = BitConverter.GetBytes(outputD);
            Console.WriteLine("Math.Log(" + inputD + "\thex: " + BitConverter.ToString(input).Replace("-", " ") + ")\t=\t" +
                outputD + "\thex: " + BitConverter.ToString(output).Replace("-", " "));
        }
    }
}
Windows 10.0.15063, mono 5.2.0:
Math.Exp(84.6481934934384 hex: 0E F3 8F 00 7C 29 55 40) = 5.7842004815199E+36 hex: 9A 64 2E 68 FC 67 91 47
Math.Log(304590.029584136 hex: C6 4D 4B 1E 38 97 12 41) = 12.6267219860911 hex: 14 E4 43 B4 E1 40 29 40
Ubuntu 16.04, mono 5.2.0.224:
Math.Exp(84.6481934934384 hex: 0E F3 8F 00 7C 29 55 40) = 5.7842004815199E+36 hex: 99 64 2E 68 FC 67 91 47
Math.Log(304590.029584136 hex: C6 4D 4B 1E 38 97 12 41) = 12.6267219860911 hex: 15 E4 43 B4 E1 40 29 40
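To confirm that the discrepancy really is confined to the last bit, the two raw bit patterns can be compared as 64-bit integers: for finite doubles of the same sign, the difference between the integer representations is the distance in ULPs. Below is a minimal sketch, not part of the original program, with the byte arrays copied from the Math.Exp outputs above:

using System;

public class UlpCheck {
    public static void Main() {
        // Bit patterns of Math.Exp(84.6481934934384) as observed on the two systems
        byte[] windowsExp = { 0x9A, 0x64, 0x2E, 0x68, 0xFC, 0x67, 0x91, 0x47 };
        byte[] ubuntuExp  = { 0x99, 0x64, 0x2E, 0x68, 0xFC, 0x67, 0x91, 0x47 };

        long a = BitConverter.ToInt64(windowsExp, 0);
        long b = BitConverter.ToInt64(ubuntuExp, 0);

        // For finite doubles of the same sign, the difference of the raw
        // integer representations equals the distance in ULPs.
        Console.WriteLine("ULP distance: " + Math.Abs(a - b)); // prints 1
    }
}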
Could you suggest any ideas on how to deal with this? How can such fundamental functions be made to behave the same way on different operating systems?