I'm attempting to verify a simple eigenvalue / eigenvector problem using TensorFlow.
Import TensorFlow:
import tensorflow as tf # version 2.6.0
For example, take a simple matrix:
ex1 = tf.convert_to_tensor([[0,1],[-2,-3]],dtype=tf.float32)
print(ex1)
Output:
tf.Tensor(
[[ 0. 1.]
[-2. -3.]], shape=(2, 2), dtype=float32)
I then calculate the eigenvalues and eigenvectors using tf.linalg.eigh:
eigVals, eigVects = tf.linalg.eigh(ex1)
print(tf.linalg.diag(eigVals),eigVects)
Output:
tf.Tensor(
[[-4. 0. ]
[ 0. 1.0000001]], shape=(2, 2), dtype=float32) tf.Tensor(
[[ 0.4472136 0.8944272]
[ 0.8944272 -0.4472136]], shape=(2, 2), dtype=float32)
Now, since the eigenvalues and eigenvectors of A are defined by Av = Lv, I can calculate Av and Lv and should get matching answers (within rounding error):
Calculating Av:
tf.matmul(ex1,eigVects)
Output:
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[ 0.8944272 , -0.4472136 ],
[-3.5777087 , -0.44721365]], dtype=float32)>
and calculating Lv:
tf.matmul(tf.linalg.diag(eigVals),eigVects)
Output:
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[-1.7888544 , -3.5777087 ],
[ 0.8944273 , -0.44721365]], dtype=float32)>
Why don't these match?
CodePudding user response:
According to the documentation of eigh:
Computes the eigen decomposition of a batch of self-adjoint matrices.
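Since ex1 is not self-adjoint (it is not symmetric), eigh is not returning an eigen decomposition of ex1 at all. In fact, -4 and 1 are exactly the eigenvalues of the symmetric matrix [[0, -2], [-2, -3]] built from the lower triangle of ex1, which is consistent with eigh only looking at one triangle of its input. For a general (non-symmetric) matrix you can use tf.linalg.eig instead, which returns complex-valued results. Note also that with the eigenvectors stored as the columns of eigVects, the matrix form of Av = Lv is A @ eigVects = eigVects @ diag(L), i.e. the diagonal matrix of eigenvalues goes on the right. A minimal sketch of the check (reusing your variable names; assumes TF 2.x):
eigVals, eigVects = tf.linalg.eig(ex1)             # general eigendecomposition, complex-valued output
A = tf.cast(ex1, eigVects.dtype)                   # cast so the matmuls below are complex-typed
Av = tf.matmul(A, eigVects)                        # A V
vL = tf.matmul(eigVects, tf.linalg.diag(eigVals))  # V diag(L) -- diag on the right
print(tf.reduce_max(tf.abs(Av - vL)))              # essentially zero, up to float32 rounding
With this, the two products should match up to rounding error.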