Function to Convert Square Matrix to Upper Hessenberg with Similarity Transformations


I am attempting to translate a MATLAB function to Python from Timothy Sauer, Numerical Analysis, Second Edition, page 546, Program 12.8. The original function receives a square matrix and returns a matrix with the same eigenvalues, but in upper Hessenberg form. It builds Householder reflectors to produce zeros in the off-diagonals and applies them as similarity transformations to bring the original matrix to upper Hessenberg form.

My Python translation obtains the correct eigenvalues for 3x3 matrices, but not for 4x4 matrices. Would anyone know the cause of the error? I pasted my code with the success and failure cases below. Thank you.



import numpy as np
import math

norm = lambda v:math.sqrt(np.sum(v**2))

def upper_hessenberg(A):
    '''
    Translated from Timothy Sauer, Numerical Analysis Second Edition, page 546, Program 12.8
    Input:  Square Matrix, A
    Output: B, a Similar Matrix with Same Eigenvalues as A except in Upper Hessenberg form
            V, a matrix containing the reflectors used to produce zeros in the off diagonals
    '''
    rows, columns   = A.shape
    B               = A[:, :].astype(float)                     #will store the similar matrix
    V               = np.zeros(shape=(rows,columns),dtype=float) #will store the reflectors

    for column in range(columns - 2):           #start at the first column, end at the third-to-last column
        row                 = column
        x                   = B[row+1:, column] #decapitate the column
        reflection_of_x     = np.zeros(len(x))  #first entry is the norm, followed by 0s
        if abs(norm(x)) <= np.finfo(float).eps: #if there are already 0s in the off-diagonals, skip this column
            continue
        reflection_of_x[0]  = norm(x)

        v   = reflection_of_x - x # v, (the difference vector) represents the line connecting the original column to the reflection of the column (see Timothy Sauer Num Analysis 2nd Edition Figure 4.11 Householder reflector)
        v   = v/norm(v) #normalize to length of 1 (unit vector)
        V[:len(v), column]    = v #save the reflector in an upper triangular matrix called V
        #verify with x-2*(x @ v * v) should equal a vector with all zeros except the leading entry

        column_projections   = np.outer(v, v @ B[row+1:, column:])     #project each col onto difference vector
        B[row+1:, column:]   = B[row+1:, column:] - (2 * column_projections)
        row_projections      = np.outer(v, B[row:, column+1:] @ v).T   #project each row onto difference vector
        B[row:, column+1:]   = B[row:, column+1:] - (2 * row_projections)
    return V, B

# Algorithm succeeds only with 3x3 matrices
eigvectors = np.array([
    [1,3,2],
    [4,5,6],
    [7,8,9],
])
eigvalues = np.array([
    [4,0,0],
    [0,3,0],
    [0,0,2]
])
M = eigvectors @ eigvalues @ np.linalg.inv(eigvectors)
print("The expected eigvals :", np.linalg.eigvals(M))
V,B = upper_hessenberg(M)
print("For 3x3 matrices, The function successfully produces these eigvals",np.linalg.eigvals(B))

#But with 4x4 matrices it fails
eigvectors = np.array([
    [1,3,2,4],
    [4,5,6,2],
    [7,8,9,5],
    [5,2,7,8]    
])
eigvalues = np.array([
    [4,0,0,0],
    [0,3,0,0],
    [0,0,2,0],
    [0,0,0,1]
])
M = eigvectors @ eigvalues @ np.linalg.inv(eigvectors)
print("The expected eigvals :", np.linalg.eigvals(M))
V,B = upper_hessenberg(M)
print("For 4x4 matrices, The function fails to obtain correct eigvals",np.linalg.eigvals(B))

CodePudding user response:

Your error is that you try to be too efficient. While the last rows are indeed increasingly reduced with leading zeros, this is not the case for the last columns. So in row_projections you need to remove the row: limiter: change B[row:, column+1:] to B[:, column+1:].
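Concretely, the two update steps inside the loop would become the following (a minimal sketch, keeping the question's variable names):

#left multiply by H = I - 2*v*v^T: only rows below the diagonal change,
#and the earlier columns are already zero there, so this slice is fine
column_projections = np.outer(v, v @ B[row+1:, column:])
B[row+1:, column:] -= 2 * column_projections

#right multiply by H: every row of the trailing columns changes,
#so the slice must cover all rows, not just B[row:, ...]
row_projections    = np.outer(v, B[:, column+1:] @ v).T
B[:, column+1:]   -= 2 * row_projections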


You are using the unstable variant of the "improved" Householder reflector. The older version would use the larger of x_refl - x and x_refl + x by setting reflection_of_x[0] = -np.sign(x[0])*norm(x) (or remove all minus signs there).
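In the loop above, that is a one-line change (a sketch, using the question's variables):

reflection_of_x[0] = -np.sign(x[0]) * norm(x) #reflect away from x, so reflection_of_x - x adds magnitudes instead of cancelling
#note np.sign(0) == 0; guard that case, or use np.copysign(norm(x), -x[0])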

The stable variant of the improved reflector uses the binomial trick when forming and normalizing x_refl - x, in case this difference becomes too small:

x_refl - x = [ norm(x) - x[0], -x[1:] ]
           = [ norm(x[1:])^2/(norm(x) + x[0]), -x[1:] ]


(x_refl - x)/norm(x_refl - x)

              [ norm(x[1:]), -(norm(x) + x[0])*(x[1:]/norm(x[1:])) ]
           =  -----------------------------------------------------
                       sqrt(2*norm(x)*(norm(x) + x[0]))

While the parts may have wildly different scales, no catastrophic cancellation happens for x[0] > 0.
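A direct transcription of that stable formula could look like this (a sketch; stable_reflector is a hypothetical helper name, using the same norm helper as the question):

def stable_reflector(x):
    #returns a unit vector v such that (I - 2*np.outer(v, v)) @ x = norm(x)*e_1,
    #computed without the cancellation hidden in norm(x) - x[0];
    #per the note above, use this form when x[0] >= 0 (the plain difference is safe when x[0] < 0)
    sigma = norm(x)
    tau   = norm(x[1:])                   #assumed nonzero, else the column is already reduced
    v     = np.empty_like(x, dtype=float)
    v[0]  = tau
    v[1:] = -(sigma + x[0]) * (x[1:] / tau)
    return v / math.sqrt(2 * sigma * (sigma + x[0]))

x = np.array([3.0, 1e-9, 4e-9])           #x[0] dominates, so norm(x) - x[0] would cancel badly
v = stable_reflector(x)
print(x - 2 * (x @ v) * v)                #approximately [norm(x), 0, 0]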

See the discussion of the same algorithm in Golub/van Loan, 4th ed., for further details and opinions, and for the code from that book.
