I need to solve a generalized eigenvalue problem of the form
K @ v = w * M @ v
(where K and M are real symmetric matrices and w is the eigenvalue associated with the eigenvector v).
I can do this using W, V = scipy.linalg.eig(K, b=M). However, I would also like the returned vectors in V to be orthonormal with respect to the scalar product induced by the matrix M (in my case, a mass matrix). That is, I would like V.T @ M @ V to be close to the identity. Is there a fast and clean option for that in scipy/numpy, or is the only way to implement some Gram-Schmidt scheme myself?
Answer:
You can use scipy.linalg.eigh. It solves the generalized eigenvalue problem for a symmetric K and a symmetric positive definite M, and it returns eigenvectors that are already normalized so that V.T @ M @ V is the identity.
For example,
In [1]: import numpy as np
In [2]: from scipy.linalg import eigh
In [3]: K = np.array([[4, 0, -1, 3], [0, 3, 0, 1], [-1, 0, 9, -2], [3, 1, -2, 9]])
In [4]: M = np.array([[14, 0, 3, 1], [0, 16, -1, 4], [3, -1, 10, 5], [1, 4, 5, 8]])
In [5]: evals, V = eigh(K, M)
In [6]: evals
Out[6]: array([0.18497917, 0.20745448, 0.50002279, 3.90342693])
In [7]: V
Out[7]:
array([[-0.05899209, -0.25680325, -0.01369778, -0.08501946],
[-0.25120648, 0.03191235, -0.01667211, 0.13305264],
[ 0.00744091, -0.02271455, 0.19957286, 0.37023263],
[ 0.03366447, 0.08755881, 0.1831409 , -0.44242341]])
Verify that V.T @ M @ V is approximately the identity:
In [8]: print(V.T @ M @ V)
[[ 1.00000000e+00  5.55111512e-17 -6.93889390e-17 -8.32667268e-17]
 [ 8.32667268e-17  1.00000000e+00 -9.71445147e-17  1.38777878e-16]
 [-3.46944695e-17 -5.55111512e-17  1.00000000e+00  0.00000000e+00]
 [-1.38777878e-16  1.24900090e-16  1.66533454e-16  1.00000000e+00]]
That last result is a bit easier to read if we suppress the tiny values:
In [9]: with np.printoptions(suppress=True):
...: print(V.T @ M @ V)
...:
[[ 1. 0. -0. -0.]
[ 0. 1. -0. 0.]
[-0. -0. 1. 0.]
[-0. 0. 0. 1.]]
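As a quick sanity check, a minimal script (using the same K and M as above) can assert both defining properties at once: the columns of V solve the generalized problem, and they are orthonormal in the M-inner product:

```python
import numpy as np
from scipy.linalg import eigh

# Symmetric K and symmetric positive definite M from the example above.
K = np.array([[4, 0, -1, 3], [0, 3, 0, 1], [-1, 0, 9, -2], [3, 1, -2, 9]])
M = np.array([[14, 0, 3, 1], [0, 16, -1, 4], [3, -1, 10, 5], [1, 4, 5, 8]])

evals, V = eigh(K, M)

# Each column v of V satisfies K @ v = w * M @ v; broadcasting evals over
# the columns of M @ V checks all four eigenpairs at once.
assert np.allclose(K @ V, M @ V * evals)

# The eigenvectors are orthonormal with respect to the M-inner product.
assert np.allclose(V.T @ M @ V, np.eye(4))
```

Note that this normalization is the default behavior of eigh for the standard (type 1) generalized problem, so no extra Gram-Schmidt step is needed.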