This situation really confuses me: I have a .NET application doing some floating-point calculations, and it seems to have problems with division, Math.Pow, and Math.Exp. For example:
double _E1 = 20616579.5;
double sub = 19000;
double total = 19623;
double percent = sub / total; //0.96825152635574341
double _result1 = Math.Pow(_E1, percent); //12078177.0
double _result2 = Math.Exp(percent * Math.Log(_E1)); //12078184.730266357
All three results (percent, _result1, and _result2) are incorrect (you can verify with a calculator).
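As my own check (not part of the original program), the "wrong" percent turns out to be exactly what you get if the quotient is rounded to single precision:

// My own check: rounding the quotient to float reproduces the "wrong" percent.
double sub = 19000;
double total = 19623;
double fullPrecision = sub / total;            // 0.96825154155837534
double roundedToFloat = (float)(sub / total);  // 0.96825152635574341
Console.WriteLine(fullPrecision.ToString("R"));
Console.WriteLine(roundedToFloat.ToString("R"));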
I have another .NET program running the same code on the same machine, and it gives the correct results:
- _result1 = 12078180.370260473
- _result2 = 12078180.370260468
- percent = 0.96825154155837534
Just looking at the result for percent, the precision only goes to about 7 decimal digits, whereas a Double usually carries 15-16 decimal digits.
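That 7-digit figure matches single precision: a float's 24-bit significand carries roughly 24·log10(2) ≈ 7.2 decimal digits, while a double's 53-bit significand carries about 15.9 (just my back-of-the-envelope arithmetic):

// Rough significant-digit counts for the two formats (my own aside):
Console.WriteLine(24 * Math.Log10(2)); // ~7.22  -> float
Console.WriteLine(53 * Math.Log10(2)); // ~15.95 -> double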
I have another, even simpler example: _outcome comes out as some ridiculous number, but the debugger shows the correct result when I put the cursor on top of the "*".
Please help; this has been driving me crazy for the last few days.
UPDATE: the problem is solved. DirectX was the culprit. See: Can floating-point precision be thread-dependent?
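For anyone else who hits this: a Direct3D 9 device created without D3DCREATE_FPU_PRESERVE drops the x87 precision control to 24 bits on that thread, so every subsequent double operation there is rounded to single precision. Below is a sketch of one possible workaround, assuming a 32-bit (x86) process and the C runtime's _controlfp; the cleaner fix is to create the device with D3DCREATE_FPU_PRESERVE.

using System;
using System.Runtime.InteropServices;

static class FpuPrecision
{
    // _controlfp from the MSVC runtime; the constants are taken from float.h.
    [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
    private static extern uint _controlfp(uint newControl, uint mask);

    private const uint MCW_PC = 0x00030000; // precision-control mask (_MCW_PC)
    private const uint PC_53  = 0x00010000; // 53-bit significand, i.e. double (_PC_53)

    // Only meaningful in a 32-bit process using the x87 FPU; on x64 the
    // precision-control bits are ignored.
    public static void RestoreDoublePrecision()
    {
        _controlfp(PC_53, MCW_PC);
    }
}

Calling FpuPrecision.RestoreDoublePrecision() on the affected thread after DirectX initialization should bring the double results back.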