I have the impression that np.linalg.eig is returning wrong eigenvectors for a 4x4 matrix. I have
M = np.array([[1., 1., -1., 0.], [1., -2., 0., 1.], [1., 1., -1., 0.], [0., 0., 0., 0.]])
Notice that the last row of M is all zeros, so the fourth component of Mx is always 0. For an eigenvector [x_1, ..., x_4] with a non-null eigenvalue lambda, this gives lambda * x_4 = 0, hence x_4 = 0. However, this is the output of np.linalg.eig():
eigvals, eigvects = np.linalg.eig(M)
print("Eigenvalues :\n", eigvals)
print("Eigenvectors :\n", eigvects)
Eigenvalues :
 [ 0.95646559+1.656647j    0.95646559-1.656647j   -1.91293118+0.j
   0.        +0.j        ]
Eigenvectors :
 [[ 0.77374282+0.j          0.77374282-0.j         -0.08130353+0.j
   -0.64465837+0.j        ]
  [ 0.19917375-0.11160644j  0.19917375+0.11160644j -0.93378468+0.j
   -0.32232919+0.j        ]
  [-0.08274466-0.58510614j -0.08274466+0.58510614j  0.34847655+0.j
    0.40291148+0.j        ]
  [ 0.        +0.j          0.        -0.j          0.        +0.j
    0.56407607+0.j        ]]
Notice that x_4 is not 0 for the first three eigenvectors... Can anyone explain this?
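One quick sanity check before concluding the output is wrong: verify each returned pair against the defining equation M v = lambda v. A minimal sketch using the matrix from the question:

```python
import numpy as np

M = np.array([[1., 1., -1., 0.],
              [1., -2., 0., 1.],
              [1., 1., -1., 0.],
              [0., 0., 0., 0.]])

eigvals, eigvects = np.linalg.eig(M)
for i, lam in enumerate(eigvals):
    v = eigvects[:, i]                       # eigenvectors are the columns
    residual = np.linalg.norm(M @ v - lam * v)
    print(f"lambda = {lam}, residual = {residual:.2e}")
```

If every residual is near machine precision, the returned pairs do satisfy the eigenvalue equation, and the problem lies elsewhere (for instance, reading rows instead of columns).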
CodePudding user response:
Edit: When double checking I noticed that my result of np.linalg.eig(M)
looks different from yours. Are you sure you ran it on the matrix you provided?
I disagree with JohanC.
From the numpy docs np.linalg.eig:
Returns [...] v : (…, M, M) array — the normalized (unit "length") eigenvectors, such that the column v[:,i] is the eigenvector corresponding to the eigenvalue w[i].
So the eigenvectors are stored as columns (read top to bottom), not as rows (read left to right).
w, v = np.linalg.eig(M)
for i in range(len(w)):
    print(w[i], v[:, i])
    print(np.allclose(w[i] * v[:, i], M @ v[:, i]))
gives
0.41421356237309426 [-0.67859834 -0.28108464 -0.67859834 0. ]
True
8.966227405936968e-16 [0.53452248 0.26726124 0.80178373 0. ]
True
-2.4142135623730945 [ 0.35740674 -0.86285621 0.35740674 0. ]
True
0.0 [0.07064755 0.41712601 0.48777356 0.76360446]
True
So not only are the eigenvalues/eigenvectors correct in the sense that they satisfy the eigenvalue equation within a reasonable error margin; all but the last one also end in zero. And it's not surprising that an eigenvector corresponding to the eigenvalue zero does not have to end in zero.
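That last point can be made concrete: the eigenvectors for eigenvalue 0 span the null space of M, and nothing forces a null-space vector to end in zero. A sketch using SVD to extract a null-space basis (the right singular vectors belonging to zero singular values):

```python
import numpy as np

M = np.array([[1., 1., -1., 0.],
              [1., -2., 0., 1.],
              [1., 1., -1., 0.],
              [0., 0., 0., 0.]])

U, s, Vt = np.linalg.svd(M)
ns = Vt[s < 1e-12].T             # columns of ns span {v : M v = 0}
print(ns.shape[1])               # 2 -- eigenvalue 0 has a 2D eigenspace
print(np.allclose(M @ ns, 0))    # True: these really are null vectors
print(np.allclose(ns[3], 0))     # last components are NOT all zero
```

Since vectors like [-1/3, 1/3, 0, 1] lie in the null space, any basis of it must contain vectors with a nonzero last component.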
CodePudding user response:
With a 4x4 matrix whose elements are all expressible as expressions involving integers, fractions, or square roots (but no imprecise floats), you could try to find exact solutions via sympy. See sympy's eigenvects().
from sympy import Matrix
M = Matrix([[1, 1, -1, 0], [1, -2, 0, 1], [1, 1, -1, 0], [0, 0, 0, 0]])
eig = M.eigenvects()
for eigenval, multiplicity, vectors in eig:
    print(f'Eigenvalue {eigenval.evalf()} ({eigenval}), multiplicity {multiplicity}')
    for v in vectors:
        print('   ', [val for val in v.evalf()], ' or ', [val for val in v])
Result:
Eigenvalue 0 (0), multiplicity 2
[0.666666666666667, 0.333333333333333, 1.00000000000000, 0] or [2/3, 1/3, 1, 0]
[-0.333333333333333, 0.333333333333333, 0, 1.00000000000000] or [-1/3, 1/3, 0, 1]
Eigenvalue 0.414213562373095 (-1 + sqrt(2)), multiplicity 1
[1.00000000000000, 0.414213562373095, 1.00000000000000, 0] or [1, -1 + sqrt(2), 1, 0]
Eigenvalue -2.41421356237309 (-sqrt(2) - 1), multiplicity 1
[1.00000000000000, -2.41421356237309, 1.00000000000000, 0] or [1, -sqrt(2) - 1, 1, 0]
For the transpose of the matrix, the output with eig = M.T.eigenvects()
is:
Eigenvalue 0 (0), multiplicity 2
[-1.00000000000000, 0, 1.00000000000000, 0] or [-1, 0, 1, 0]
[0, 0, 0, 1.00000000000000] or [0, 0, 0, 1]
Eigenvalue 0.414213562373095 (-1 + sqrt(2)), multiplicity 1
[3.41421356237309, 0.414213562373095, -2.41421356237309, 1.00000000000000] or [sqrt(2) + 2, -1 + sqrt(2), -sqrt(2) - 1, 1]
Eigenvalue -2.41421356237309 (-sqrt(2) - 1), multiplicity 1
[0.585786437626905, -2.41421356237309, 0.414213562373095, 1.00000000000000] or [2 - sqrt(2), -sqrt(2) - 1, -1 + sqrt(2), 1]
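As a final cross-check, the exact sympy eigenvalues (expanded by their multiplicities) can be compared against numpy's numeric ones. A small sketch:

```python
import numpy as np
from sympy import Matrix

M_np = np.array([[1., 1., -1., 0.],
                 [1., -2., 0., 1.],
                 [1., 1., -1., 0.],
                 [0., 0., 0., 0.]])
M_sym = Matrix([[1, 1, -1, 0], [1, -2, 0, 1], [1, 1, -1, 0], [0, 0, 0, 0]])

# Expand each exact eigenvalue by its algebraic multiplicity.
exact = sorted(float(val.evalf())
               for val, mult, _ in M_sym.eigenvects()
               for _ in range(mult))

numeric = sorted(np.linalg.eigvals(M_np).real)
print(np.allclose(exact, numeric, atol=1e-9))  # True
```

Both routes agree on the spectrum {0, 0, -1 + sqrt(2), -sqrt(2) - 1}, which confirms that numpy was never wrong, only the original output was misread.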