While translating code from MATLAB to Python, I noticed a difference on the order of 1e-15 in my results.

I then traced this to the exp() functions yielding different results (exp() in MATLAB, math.exp() in Python, np.exp() in NumPy). It seems like a small difference, but it accumulates over iterations and the final results end up diverging completely.

For instance, if you evaluate exp(2.34983545) in both Python and MATLAB, export the result from MATLAB to Python (or vice versa), and compare them, you can see that they differ (again, on the order of 1e-15).

Please let me know if anyone has had this problem and if there is a way to fix it!

The simple code in MATLAB:

exp_result = exp(2.34983545)
save('exp_result.mat','exp_result')

The simple code in Python:

import numpy as np
import math
from scipy.io import loadmat

np_exp = np.exp(2.34983545)
math_exp = math.exp(2.34983545)

matlab_data = loadmat('exp_result.mat')
matlab_exp = matlab_data['exp_result']

print(np_exp - matlab_exp)
print(math_exp - matlab_exp)

# both comparisons yield -1.7763568394002505e-15
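For context, that printed difference is exactly one ULP (unit in the last place), i.e. the spacing between adjacent double-precision values near exp(2.34983545) ≈ 10.48. In other words, the two implementations disagree only in the rounding of the very last bit. A quick check (Python 3.9+ for `math.ulp`):

```python
import math

y = math.exp(2.34983545)      # ~10.48, so y lies in [8, 16)
print(math.ulp(y))            # spacing between doubles near y: 1.7763568394002505e-15
print(math.ulp(y) == 2**-49)  # True: exactly 2**-49 for values in [8, 16)
```

An error of one ULP is the best any correctly rounded `exp` can be off by, so neither implementation is "wrong" here; they simply round the last bit differently.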

I would like to add that I have compared 500 million elements using this loadmat method, and the results match exactly in every case except for the exp() functions.

I am aware of numerical imprecision due to floating point operations, but how can this be fixed in this scenario?

Thanks in advance!

  • `I am aware of numerical imprecision due to floating point operations` then you know what the problem is. The way the internal `exp` algorithms handle floating point values must be subtly different between the two independent languages; you can't assume they will give the same result to infinite precision – Wolfie May 04 '23 at 16:14
  • Moreover, the "fix" is to never use precise equality to assess whether two floating point values are "equal"; instead, compare the absolute difference against some tolerance near machine precision – Wolfie May 04 '23 at 16:20
  • @Wolfie How does that "fix" of yours fix the issue (that "this slight difference accumulates over iterations and the final results end up exploding")? – Kelly Bundy May 04 '23 at 16:28
  • Without a [mcve] of what the iterative process is, we can't suggest anything more. The code would have to be written in such a way that a difference that small does not have a large impact. If an algorithm is truly reliant on that precision (asking for trouble) then you would have to use something like [VPA in MATLAB](https://uk.mathworks.com/help/symbolic/vpa.html) – Wolfie May 04 '23 at 16:35
  • Thank you @Wolfie for your comments. I'll end up using MATLAB's VPA at a higher degree of precision (maybe quadruple) to try to solve the problem. But it's simply a shame, because everything else matches to the same exact precision (I'm speaking about sine, cosine, matrix multiplication, etc.). – Sf4r4di May 05 '23 at 14:15
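To make the tolerance-based comparison from the comments concrete, here is a minimal sketch. The MATLAB value is simulated as being one ULP away from Python's result, which is an assumption for illustration only (in practice you would load it with `loadmat` as in the question):

```python
import math

py_exp = math.exp(2.34983545)
# Simulated MATLAB result, one ULP off (illustrative assumption, not a real MATLAB value)
matlab_exp = py_exp + math.ulp(py_exp)

print(py_exp == matlab_exp)                             # False: exact equality is too strict
print(math.isclose(py_exp, matlab_exp, rel_tol=1e-12))  # True: equal within tolerance
```

This changes how results are compared, not how they are computed; if the iterative algorithm itself amplifies one-ULP differences into divergent results, the algorithm is ill-conditioned and needs restructuring (or higher precision, as the VPA comment suggests), rather than a different equality test.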

0 Answers