Function for calculating the determinant of a matrix

I want my function to calculate the determinant of an input matrix A using row reduction to convert A to echelon form, after which the determinant should just be the product of the diagonal of A. I can assume that A is an n x n np.array.

This is the code that I already have:

import numpy as np

def determinant(A):
    A = np.matrix.copy(A)
    row_switches = 0
    #  Reduce A to echelon form
    for col in range(A.shape[1]):
        if find_non_zero(A, col) != col:
            #  Switch rows
            A[[find_non_zero(A, col), col], :] = A[[col, find_non_zero(A, col)], :]
            row_switches += 1
        #  Make all 0's below "pivot"
        for row in range(col + 1, A.shape[0]):
            factor = A[row, col] / A[col, col]
            A[row, :] = A[row, :] - factor * A[col, :]

    return A.diagonal().prod() * (-1) ** row_switches


# Find first non-zero value starting from diagonal element
def find_non_zero(A, n):
    row = n
    while row < A.shape[0] and A[row, n] == 0:
        row += 1
    return row

I then compare my results with np.linalg.det(A). The difference is manageable for random matrices of floats below 50x50 (around 2.8e-08), but beyond 70x70 the difference is between 1,000 and 10,000 on average. What could be the cause of this?
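
Roughly, the test I'm running looks like this:

import numpy as np

n = 70
A = np.random.rand(n, n)
print(abs(determinant(A) - np.linalg.det(A)))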

The other problem I have with my code is that for a matrix of ints A = np.random.randint(low=-1000, high=1000, size=(25, 25)), the difference is even more insane:

1820098560 (mine) vs 1.0853429659737294e+81 (numpy)

CodePudding user response:

There are two issues with integer arrays, and you can address both by changing the first line of your function to A = np.matrix(A, dtype=float).

  1. You risk overflowing and throwing off your results completely.
>>> np.arange(1, 10).prod() # correct
362880
>>> np.arange(1, 20).prod() # incorrect (integer overflow)
109641728
>>> np.arange(1, 20, dtype=float).prod() # correct
1.21645100408832e+17
  2. Whatever the result of the right-hand side of the line A[row, :] = A[row, :] - factor * A[col, :] is, it will be cast back to an integer.
>>> a = np.zeros((3,), dtype=int)
>>> a[0] = 2.4
>>> a
array([2, 0, 0])
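
With the float cast in place, the same integer test case should line up with numpy, something like:

A = np.random.randint(low=-1000, high=1000, size=(25, 25))
print(determinant(A))      # now a float, on the same order as...
print(np.linalg.det(A))    # ...numpy's result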

As for the inaccuracies with float arrays, you have to live with them because of floating-point arithmetic's limited precision. When the product of the diagonal gives you a number like 6.59842495617676e+17 and numpy gives 6.598424956176783e+17, you can see the results are very close. But a float can only represent so many significant digits, and when the number is very large, a difference in just the last couple of digits actually means a difference in the thousands. This only gets worse as your matrices, and consequently your numbers, get bigger. In terms of the relative difference, i.e. (your_method - numpy) / numpy, the result is fairly good regardless of the magnitude of the numbers you work with.
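
You can check this yourself along these lines:

A = np.random.rand(70, 70)
mine, ref = determinant(A), np.linalg.det(A)
print(abs(mine - ref))             # absolute difference: can reach the thousands
print(abs(mine - ref) / abs(ref))  # relative difference: stays tiny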


Stability of the algorithm

A point from Wikipedia about your factor value when it is very small:

One possible problem is numerical instability, caused by the possibility of dividing by very small numbers. If, for example, the leading coefficient of one of the rows is very close to zero, then to row-reduce the matrix, one would need to divide by that number. This means that any error that existed for the number that was close to zero would be amplified. Gaussian elimination is numerically stable for diagonally dominant or positive-definite matrices. For general matrices, Gaussian elimination is usually considered to be stable, when using partial pivoting, even though there are examples of stable matrices for which it is unstable.[11]

[snip]

This algorithm differs slightly from the one discussed earlier, by choosing a pivot with largest absolute value. Such a partial pivoting may be required if, at the pivot place, the entry of the matrix is zero. In any case, choosing the largest possible absolute value of the pivot improves the numerical stability of the algorithm, when floating point is used for representing numbers.
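
Applied to your code, partial pivoting would mean replacing find_non_zero with something that picks the row with the largest absolute value in the current column (a rough sketch, not your original helper):

import numpy as np

# Index (from row n down) of the entry in column n with the largest
# absolute value; better-conditioned than the first non-zero entry.
def find_pivot(A, n):
    return n + np.argmax(np.abs(A[n:, n]))

The row swap and the (-1) ** row_switches bookkeeping stay exactly as they are.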

If it matters, numpy uses LAPACK's LU decomposition algorithm, which implements an iterative version of Sivan Toledo's recursive LU algorithm.
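
If you want to see that approach in action, SciPy exposes the same LAPACK routine, and the determinant falls out of the factorization (a sketch, assuming SciPy is installed):

import numpy as np
from scipy.linalg import lu_factor

def det_via_lu(A):
    lu, piv = lu_factor(A)  # PA = LU with partial pivoting (LAPACK getrf)
    # piv[i] != i marks a row interchange; each swap flips the sign
    sign = (-1) ** np.count_nonzero(piv != np.arange(A.shape[0]))
    return sign * np.prod(np.diag(lu))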
