While migrating to Python from Matlab, I get different results for matrix multiplication and exponentiation.
This is a simple softmax classifier implementation. I run the Python code and export the variables to a .mat file, then run the original Matlab code, load the variables exported from Python, and compare the two.
Python code:
import numpy as np
from scipy import io

f = np.array([[4714, 4735, 4697], [4749, 4748, 4709]])
f = f.astype(np.float64)
a = np.array([[0.001, 0.001, 0.001], [0.001, 0.001, 0.001], [0.001, 0.001, 0.001]])
reg = f.dot(a)
omega = np.exp(reg)
sumomega = np.sum(omega, axis=1)
io.savemat('python_variables.mat', {'p_f': f,
                                    'p_a': a,
                                    'p_reg': reg,
                                    'p_omega': omega,
                                    'p_sumomega': sumomega})
Matlab code:
f = [4714, 4735, 4697; 4749, 4748, 4709];
a = [0.001, 0.001, 0.001; 0.001, 0.001, 0.001; 0.001, 0.001, 0.001];
reg = f*a;
omega = exp(reg);
sumomega = sum(omega, 2);
load('python_variables.mat');
I compare the results by checking the following:
norm(f - p_f) = 0
norm(a - p_a) = 0
norm(reg - p_reg) = 3.0767e-15
norm(omega - p_omega) = 4.0327e-09
norm(omega - exp(p_f*p_a)) = 0
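For scale, the entries of reg here are only around 14.1–14.2, and the spacing between adjacent double-precision numbers at that magnitude is about 1.8e-15, so the 3.0767e-15 difference amounts to roughly one last-bit rounding step per entry. A quick sketch in Python to put that number in context (14.146 is just the first entry of reg):

import numpy as np
# np.spacing gives the gap to the next representable double;
# near 14.1 this is about 1.78e-15, so the difference in reg is ~1 ULP per entry.
print(np.spacing(14.146))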
So the difference seems to come from the multiplication, and it grows much larger after exp(). And my original data matrix is larger than this example; there the differences blow up:
norm(reg - p_reg) = 7.0642e-12
norm(omega - p_omega) = 1.2167e+250
This also means that in some cases sumomega goes to inf or to zero in Python but not in Matlab, so the classifier outputs differ.
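To illustrate how a tiny difference in reg turns into a huge difference in omega, here is a minimal sketch (the values 600 and 1e-12 are made up, just roughly the scale of my larger data set): a small absolute difference d in the exponent becomes a relative difference of about d in exp(), so the absolute difference scales with exp(reg) itself.

import numpy as np

x = 600.0   # a large entry of reg (made-up value, close to where exp() overflows)
d = 1e-12   # a tiny perturbation, like the Python/Matlab difference in reg
# exp(x + d) - exp(x) is roughly exp(x) * d, so the absolute gap is enormous...
print(np.exp(x + d) - np.exp(x))
# ...while the relative gap stays around 1e-12.
print((np.exp(x + d) - np.exp(x)) / np.exp(x))

So the relative error stays tiny, but the absolute values of omega (and whether sumomega overflows to inf) can end up completely different.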
What am I missing here? How can I fix this to get exactly the same results?