Why are eigenvector computations in matlab and python not matching?

Matlab:

m1 = [ 333.33333333  83.33333333   0. ;  83.33333333 333.33333333  83.33333333 ;  0.  83.33333333 166.66666667]
k1 = [ 800. -400.    0.; -400.  800. -400.;   0. -400.  400.]
[vec, val] = eig(m1, k1)

vec =

   -0.0106   -0.0289    0.0394
    0.0183   -0.0000    0.0683
   -0.0211    0.0289    0.0789

python:

import scipy
import numpy as np
m1 = np.array([[333.33333333,  83.33333333,  0.], [ 83.33333333, 333.33333333, 83.33333333], [0., 83.33333333, 166.66666667]])
k1 = np.array([[800., -400.,  0.], [-400.,  800., -400.], [0., -400.,  400.]])
val, vec = scipy.linalg.eig(m1, k1)

vec = 
[[ 3.53553391e-01, -7.07106781e-01,  3.53553391e-01], 
[ 6.12372436e-01,  1.88119544e-16, -6.12372436e-01], 
[ 7.07106781e-01,  7.07106781e-01,  7.07106781e-01]]

So matlab and python eigenvectors vec are not matching. Matlab document says

[V,D] = eig(A,B) returns diagonal matrix D of generalized eigenvalues and full matrix V whose columns are the corresponding right eigenvectors, so that A*V = B*V*D.

and m1 * vec = k1 * vec * val holds for the MATLAB output but not for the Python output. How can I get eigenvectors similar to MATLAB's using Python/NumPy?

CodePudding user response:

Eigenvectors are not unique; if x is an eigenvector, then a * x is still an eigenvector for any nonzero scalar a.

Look at the two results from Python and MATLAB: the first column of vec from MATLAB looks like a scaled version of the third column of vec from Python, the second column from MATLAB is a scaled version of the second column from Python (recall that 1.88119544e-16 is essentially zero), and the third column corresponds to the first column. So my guess is that val from MATLAB is a reversed version of val from Python; I don't have MATLAB at hand to check this, but you can verify it yourself.
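You can check this column correspondence numerically. The sketch below compares the SciPy output against the MATLAB vec copied from the question (rounded to 4 decimals there, so the match is close but not exact): for each MATLAB column it finds the Python column pointing in the same or opposite direction, i.e. with |cos| of the angle between them near 1.

```python
import numpy as np
import scipy.linalg

m1 = np.array([[333.33333333,  83.33333333,   0.],
               [ 83.33333333, 333.33333333,  83.33333333],
               [  0.,          83.33333333, 166.66666667]])
k1 = np.array([[ 800., -400.,    0.],
               [-400.,  800., -400.],
               [   0., -400.,  400.]])

val, vec = scipy.linalg.eig(m1, k1)
vec = vec.real  # this pencil is symmetric, so the eigenvectors are real

# MATLAB's vec, copied from the question (rounded to 4 decimals)
vec_matlab = np.array([[-0.0106, -0.0289, 0.0394],
                       [ 0.0183, -0.0000, 0.0683],
                       [-0.0211,  0.0289, 0.0789]])

# Normalize all columns, then compare directions via absolute cosines
normed = vec / np.linalg.norm(vec, axis=0)
for j in range(3):
    a = vec_matlab[:, j] / np.linalg.norm(vec_matlab[:, j])
    cosines = np.abs(a @ normed)          # |cos| against each Python column
    k = int(np.argmax(cosines))
    print(f"MATLAB column {j} ~ Python column {k}  (|cos| = {cosines[k]:.6f})")
```

With the outputs shown in the question, this pairs MATLAB columns 0, 1, 2 with Python columns 2, 1, 0, confirming the reversed order.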

Also in python, remember * is element-wise multiplication (i.e., .* in matlab). You can use @ for matrix multiplication.
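A tiny standalone illustration of the difference:

```python
import numpy as np

a = np.array([[1., 2.],
              [3., 4.]])
b = np.array([[5., 6.],
              [7., 8.]])

print(a * b)   # element-wise product, MATLAB's .*
print(a @ b)   # matrix product, MATLAB's *
```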

import scipy.linalg
import numpy as np
m1 = np.array([[333.33333333,  83.33333333,  0.], [ 83.33333333, 333.33333333, 83.33333333], [0., 83.33333333, 166.66666667]])
k1 = np.array([[800., -400.,  0.], [-400.,  800., -400.], [0., -400.,  400.]])
val, vec = scipy.linalg.eig(m1, k1)

print(m1 @ vec)
print(k1 @ vec * val)  # (k1 @ vec) * val scales column j by val[j], i.e. k1 @ vec @ np.diag(val)

As you can see in the following output, the two printed matrices are the same (the imaginary parts are all +0.j, so you can ignore them).

Output:

[[ 1.68882167e+02 -2.35702260e+02  6.68200939e+01]
 [ 2.92512493e+02 -3.14270210e-09 -1.15735798e+02]
 [ 1.68882167e+02  1.17851130e+02  6.68200939e+01]]
[[ 1.68882167e+02+0.j -2.35702260e+02+0.j  6.68200939e+01+0.j]
 [ 2.92512493e+02+0.j -3.14263578e-09+0.j -1.15735798e+02+0.j]
 [ 1.68882167e+02+0.j  1.17851130e+02+0.j  6.68200939e+01+0.j]]
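As for getting eigenvectors with MATLAB's scaling: the MATLAB output in the question appears to be normalized so that V'*k1*V = I (which MATLAB does for a symmetric pencil with a positive definite B matrix). You can reproduce that in SciPy, either by rescaling the eig() output or by using scipy.linalg.eigh, which handles the symmetric-definite case directly and returns B-orthonormal eigenvectors. A sketch (column order and signs may still differ from MATLAB):

```python
import numpy as np
import scipy.linalg

m1 = np.array([[333.33333333,  83.33333333,   0.],
               [ 83.33333333, 333.33333333,  83.33333333],
               [  0.,          83.33333333, 166.66666667]])
k1 = np.array([[ 800., -400.,    0.],
               [-400.,  800., -400.],
               [   0., -400.,  400.]])

# Option 1: rescale the eig() output so each column v satisfies v.T @ k1 @ v = 1
# (the square roots are real because k1 is positive definite here)
val, vec = scipy.linalg.eig(m1, k1)
vec = vec.real
vec_knorm = vec / np.sqrt(np.diag(vec.T @ k1 @ vec))
print(vec_knorm)

# Option 2: eigh() already returns k1-orthonormal eigenvectors,
# i.e. v.T @ k1 @ v is the identity, with eigenvalues in ascending order
w, v = scipy.linalg.eigh(m1, k1)
print(v)
```

Up to column order and sign, both options give the same magnitudes as the MATLAB vec in the question.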