Handling rounding errors in matrix multiplication in numpy


TL;DR: Matrix multiplication of a state transition matrix should be norm preserving (each row keeps summing to 1), but np.matmul does not conserve the norm. How can I fix this? Is there a better Python module to do so?

I have a right state transition matrix, A, i.e., s(t)A(tau) = s(t+tau),

where s(t) is a probability vector whose entries sum to 1. We also know that each row of A sums to 1.

We know that A^n is also a right state transition matrix for any natural number n.

One way to find the steady state distribution is to compute A^n as n goes to infinity. The following snippet calculates A^(2^n):

    import copy
    import numpy as np

    def p_multiplier(A, n):
        # Square B repeatedly; after the loop B equals A^(2^n)
        B = copy.deepcopy(A)
        for i in range(n):
            B = np.matmul(B, B)
        return B[0]

This works all right, but at some point the rows stop summing to 1, i.e., numpy starts to leak the norm of A[i].

The reason is probably rounding errors. One quick fix might be to forcefully restore the norm of each row at each iteration:

    def p_multiplier(A, n):
        B = copy.deepcopy(A)
        for i in range(n):
            B = np.matmul(B, B)
            # Renormalize each row so that it sums to 1 again
            for j in range(len(B)):
                B[j] = B[j] / sum(B[j])
        return B[0]

How scientific is this fix? Does scipy or mpmath handle this better?

Edit: To complete the MWE, you can use the following state transition matrix for A:

    A = np.asarray([[0.76554539, 0.13929202, 0,          0,          0.09516258],
                    [0.04877058, 0.76996018, 0.18126925, 0,          0         ],
                    [0,          0.09516258, 0.76554539, 0.13929202, 0         ],
                    [0,          0,          0.13929202, 0.67943873, 0.18126925],
                    [0.09516258, 0,          0,          0.04877058, 0.85606684]])
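
For example, squaring this A a few dozen times is enough to reproduce the leak (a quick sketch using A and np as defined just above):

    B = A.copy()
    for i in range(30):
        B = np.matmul(B, B)   # B is now A^(2^30)
    print(B.sum(axis=1))      # the row sums have drifted away from 1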

CodePudding user response:

The problem comes from the fact that the operation is not numerically stable. As a result, it quickly diverges (exponentially) to 0 or to infinity, even with relatively small values of n such as 70. You can use a diagonalization method based on eigenvalues (see here for more information), which is far more numerically stable.

    import numpy

    def stable_multiplier(A, n):
        # Diagonalize A once, then raise only the eigenvalues to the n-th power
        d, P = numpy.linalg.eig(A)
        return (P @ numpy.diag(d**n) @ numpy.linalg.inv(P))[0].real

The number of lost (decimal) digits is in O(log10(n)), so the error is linear in n. Note that if n is large, say > 1000, then you need to be very careful about the numerical stability of the overall algorithm, especially if it is part of an iterative process (with a lot of iterations). For more information, please read this and this.
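
To quantify the drift, here is a minimal sketch (assuming the 5x5 A from the question's edit) that prints how far the row sums of A^(2^k) stray from 1 under repeated squaring and under the eigendecomposition route:

    import numpy

    A = numpy.asarray([[0.76554539, 0.13929202, 0, 0, 0.09516258],
                       [0.04877058, 0.76996018, 0.18126925, 0, 0],
                       [0, 0.09516258, 0.76554539, 0.13929202, 0],
                       [0, 0, 0.13929202, 0.67943873, 0.18126925],
                       [0.09516258, 0, 0, 0.04877058, 0.85606684]])

    d, P = numpy.linalg.eig(A)      # diagonalize once
    Pinv = numpy.linalg.inv(P)

    B = A.copy()
    for k in range(1, 21):
        B = B @ B                                         # repeated squaring: A^(2^k)
        C = (P @ numpy.diag(d ** (2 ** k)) @ Pinv).real   # the same power via the eigenvalues
        print(k,
              abs(B.sum(axis=1) - 1).max(),               # row-sum drift, repeated squaring
              abs(C.sum(axis=1) - 1).max())               # row-sum drift, eigendecomposition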

CodePudding user response:

Your A rows don't sum to exactly 1, and the deviation will only grow with further calculations. Floating point values are not exact.

In [162]: A = np.asarray([[0.76554539, 0.13929202, 0, 0, 0.09516258],
     ...:                 [0.04877058, 0.76996018, 0.18126925, 0, 0],
     ...:                 [0, 0.09516258, 0.76554539, 0.13929202, 0],
     ...:                 [0, 0, 0.13929202, 0.67943873, 0.18126925],
     ...:                 [0.09516258, 0, 0, 0.04877058, 0.85606684]])
In [163]: A
Out[163]: 
array([[0.76554539, 0.13929202, 0.        , 0.        , 0.09516258],
       [0.04877058, 0.76996018, 0.18126925, 0.        , 0.        ],
       [0.        , 0.09516258, 0.76554539, 0.13929202, 0.        ],
       [0.        , 0.        , 0.13929202, 0.67943873, 0.18126925],
       [0.09516258, 0.        , 0.        , 0.04877058, 0.85606684]])
In [164]: A.shape
Out[164]: (5, 5)
In [165]: A.sum(0)
Out[165]: array([0.90947855, 1.00441478, 1.08610666, 0.86750133, 1.13249867])
In [166]: A.sum(1)
Out[166]: array([0.99999999, 1.00000001, 0.99999999, 1.        , 1.        ])
In [167]: A1 = A@A
In [168]: A1.sum(0)
Out[168]: array([0.8530045 , 1.0033992 , 1.13436947, 0.79593261, 1.2132942 ])
In [169]: A1.sum(1)
Out[169]: array([0.99999998, 1.00000002, 0.99999998, 1.        , 1.        ])

Even if I normalize the array, the row sums, viewed at full precision, are not all exactly 1.

In [175]: A2 = A/A.sum(1)
In [176]: A2.sum(1)
Out[176]: array([1., 1., 1., 1., 1.])
In [177]: A2.sum(1).tolist()
Out[177]: 
[0.9999999962625339,
 1.0000000046007966,
 0.9999999967038282,
 1.0000000013929202,
 1.000000000951626]
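
For reference, a small sketch (reusing the A defined in In [162] above, not output from that session): row-wise normalization is usually written with keepdims=True so that each row is divided by its own sum, and the result is best checked against 1 with a tolerance:

    import numpy as np

    # A as defined in In [162] above
    A_norm = A / A.sum(axis=1, keepdims=True)    # divide every row by its own sum

    print(A_norm.sum(axis=1))                    # displays as 1.0 at print precision
    print(np.allclose(A_norm.sum(axis=1), 1.0))  # True: equal up to floating point tolerance
    print(A_norm.sum(axis=1).tolist())           # full-precision view of the row sums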