
I'm trying to calculate the eigenvalues and eigenvectors of a matrix in Python. I used numpy and, as an example, did this with a matrix M:

import numpy as np
from numpy.linalg import eig

w, v = eig(M)
idx = w.argsort()   # sort eigenvalues in ascending order
eigVal = w[idx]
eigVec = v[:, idx]
print(eigVal)
print("an eigen vector is:")
print(eigVec[0])
print()
nnn = M.dot(eigVec[0])  # matrix times supposed eigenvector
for i in range(len(nnn)):
    nnn[i] = nnn[i] / eigVal[0]
print("The result of M*vector/eigenvalue is:")
print(nnn)

And got as a result:

[452.78324098 461.88198554 468.47201706 474.43054819]
an eigen vector is:
[ 0.92852341  0.37084248 -0.01780576  0.00175573]

The result of M*vector/eigenvalue is:
[ 9.28755114e-01  3.72671398e-01 -2.29673727e-02 -9.27549232e-05]

As you can see, although similar, the vector that comes back from the multiplication is not that close to the eigenvector numpy originally computed. How can the precision be improved?

  • You can try using `longdouble` values, this will give you about 18 digits of precision. See https://stackoverflow.com/questions/25481058/how-do-i-use-the-numpy-longdouble-dtype – Barmar Feb 09 '23 at 23:12
  • 1
    I don't think this is an issue with accuracy, for example try `np.matmul(M,v[:,0]) - np.dot(w[0],v[:,0])` - that should be a very small residual – bn_ln Feb 09 '23 at 23:55

1 Answer


You need to take the eigenvectors along the right axis: `eig` returns the eigenvectors as the *columns* of `v`, so the first eigenvector is `v[:, 0]`, not `v[0]`. Transposing the sorted array makes its rows the eigenvectors:

import numpy as np
from numpy.linalg import eig

w, v = eig(M)
idx = w.argsort()   # sort eigenvalues in ascending order
eigVal = w[idx]
eigVec = np.transpose(v[:, idx])  # <-- transpose here: rows are now eigenvectors
print(eigVal)
print("an eigen vector is:")
print(eigVec[0])
print()
nnn = M.dot(eigVec[0])  # matrix times eigenvector
nnn /= eigVal[0]
print("The result of M*vector/eigenvalue is:")
print(nnn)
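To convince yourself this is a layout issue rather than a precision issue, here is a self-contained check on a small symmetric matrix (the original `M` wasn't posted, so this `M` is just an illustrative stand-in). After transposing, every row of `eigVec` satisfies the eigenvector equation to machine precision:

```python
import numpy as np

# Illustrative symmetric matrix (the question's M was not posted)
M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

w, v = np.linalg.eig(M)
idx = w.argsort()            # ascending eigenvalue order
eigVal = w[idx]
eigVec = v[:, idx].T         # rows are now eigenvectors

# Residual ||M @ vec - val * vec|| is tiny for every (val, vec) pair
for val, vec in zip(eigVal, eigVec):
    print(np.linalg.norm(M @ vec - val * vec))
```

Without the transpose, the same loop over `v[:, idx][0]`-style rows would give residuals of order 1, which is exactly the discrepancy seen in the question.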
bn_ln