I am observing some strange behavior regarding the results of the following code:
using System;
using System.Linq.Expressions;
using System.Reflection;

namespace Test {
    class Program {
        private static readonly MethodInfo Tan = typeof(Math).GetMethod("Tan", new[] { typeof(double) });
        private static readonly MethodInfo Log = typeof(Math).GetMethod("Log", new[] { typeof(double) });

        static void Main(string[] args) {
            var c1 = 9.97601998143507984195821336470544338226318359375d;
            var c2 = -0.11209109500765944422706610339446342550218105316162109375d;

            // The formula computed directly.
            var result1 = Math.Pow(Math.Tan(Math.Log(c1) / Math.Tan(c2)), 2);

            // The same formula built as an expression tree and compiled at runtime.
            var p1 = Expression.Parameter(typeof(double));
            var p2 = Expression.Parameter(typeof(double));
            var expr = Expression.Power(Expression.Call(Tan, Expression.Divide(Expression.Call(Log, p1), Expression.Call(Tan, p2))), Expression.Constant(2d));
            var lambda = Expression.Lambda<Func<double, double, double>>(expr, p1, p2);
            var result2 = lambda.Compile()(c1, c2);

            var s1 = DoubleConverter.ToExactString(result1);
            var s2 = DoubleConverter.ToExactString(result2);
            Console.WriteLine("Result1: {0}", s1);
            Console.WriteLine("Result2: {0}", s2);
        }
    }
}
Compiled for x64, the code gives the same result for both computations:
Result1: 4888.95508254035303252749145030975341796875
Result2: 4888.95508254035303252749145030975341796875
But when compiled for x86 or Any CPU, the results differ:
Result1: 4888.95508254035303252749145030975341796875
Result2: 4888.955082542781383381225168704986572265625
Why does result1 stay the same while result2 depends on the target architecture? Is there any way to make result1 and result2 match on any given architecture?
The DoubleConverter class is taken from http://jonskeet.uk/csharp/DoubleConverter.cs. Before you tell me to use decimal: I don't need more precision, I just need the results to be consistent. The target framework is .NET 4.5.2, and the test project was built in Debug mode. I am using Visual Studio 2015 Update 1 RC on Windows 10.
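For reference, here is a sketch of one workaround I am considering (not part of the test above, and the variable names are mine, just for illustration). It relies on the C# specification's rule that floating-point operations may be performed at higher precision than the operand type, but that an explicit cast to double rounds the value back to 64 bits. My assumption is that pinning each intermediate this way might make result1 match what the compiled expression tree produces on x86:

// Sketch only (assumption: per the C# spec, an explicit cast to double
// narrows an intermediate that may be held at higher precision back to 64 bits).
var tanC2 = (double)Math.Tan(c2);
var quotient = (double)(Math.Log(c1) / tanC2);
var result1Pinned = Math.Pow((double)Math.Tan(quotient), 2);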
Thanks.
EDIT
At user djcouchycouch's suggestion, I tried to simplify the example further:
var c1 = 9.97601998143507984195821336470544338226318359375d;
var c2 = -0.11209109500765944422706610339446342550218105316162109375d;
var result1 = Math.Log(c1) / Math.Tan(c2);
var p1 = Expression.Parameter(typeof(double));
var p2 = Expression.Parameter(typeof(double));
var expr = Expression.Divide(Expression.Call(Log, p1), Expression.Call(Tan, p2));
var lambda = Expression.Lambda<Func<double, double, double>>(expr, p1, p2);
var result2 = lambda.Compile()(c1, c2);
x86 or Any CPU, Debug:
Result1: -20.43465311535924655572671326808631420135498046875
Result2: -20.434653115359243003013034467585384845733642578125
x64, Debug:
Result1: -20.43465311535924655572671326808631420135498046875
Result2: -20.43465311535924655572671326808631420135498046875
x86 or Any CPU, Release:
Result1: -20.434653115359243003013034467585384845733642578125
Result2: -20.434653115359243003013034467585384845733642578125
x64, Release:
Result1: -20.43465311535924655572671326808631420135498046875
Result2: -20.43465311535924655572671326808631420135498046875
The point is that the results vary between Debug and Release builds and between x86 and x64, and the more complicated the formula, the larger the deviations it is likely to produce.
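To illustrate the amplification, here is a hypothetical demonstration (not part of my original test): the two Debug values of the quotient above differ by exactly one ULP (about 3.6e-15), while the two Result2 values in the first test differ by about 2.4e-9, so the remaining Tan and Pow steps amplify the difference by roughly six orders of magnitude. Nudging the quotient by one ULP by hand should reproduce that effect:

// Hypothetical demonstration: perturb the intermediate quotient by one ULP
// (the adjacent representable double) and feed it through the rest of the formula.
double mid = Math.Log(c1) / Math.Tan(c2);
double midNudged = BitConverter.Int64BitsToDouble(BitConverter.DoubleToInt64Bits(mid) + 1);
Console.WriteLine(DoubleConverter.ToExactString(Math.Pow(Math.Tan(mid), 2)));
Console.WriteLine(DoubleConverter.ToExactString(Math.Pow(Math.Tan(midNudged), 2)));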