Is there a way to avoid inexact results when using linear algebra with numpy?


I'm trying to run the following code, which is simple matrix algebra:

import numpy as np
from numpy import linalg

A = np.array([[1,2],[1,-1],[1,1]])
b = np.array([[1],[-1],[5]])

r = linalg.inv(A.transpose()@A)@A.transpose()@b

print(r)

The output I obtain is:

[[1.]
 [1.]]

However, I'm expecting exactly [[1],[1]], which is what I obtain when doing the calculation by hand. Doing further operations with r in this form starts giving me incorrect results; for example, if I compute A@r I obtain:

[[3.00000000e+00]
 [3.33066907e-16]
 [2.00000000e+00]]

Instead of the expected [[3],[0],[2]]. Is there another way to do the matrix algebra that avoids this issue, maybe by having Python use fractions instead of floating-point computation when taking the inverse?

CodePudding user response:

It turns out that it is better to do this kind of matrix algebra with the sympy library, which works symbolically and keeps exact values instead of floating-point approximations. We get the desired (and more readable) result with:

import sympy as sy

A = sy.Matrix([[1,2],[1,-1],[1,1]])
b = sy.Matrix([[1],[-1],[5]])

# least-squares solution via the normal equations, computed with exact rationals
r = (A.T*A)**-1*A.T*b

print(r)
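
If later steps need a numpy array again (for speed or for use with other libraries), the exact sympy result can be converted back to floats at the end. A minimal sketch continuing from the code above (the r_np and Ar_np names are just illustrative):

import numpy as np

# convert the exact sympy matrices back to numpy float arrays when needed
r_np = np.array(r.tolist(), dtype=float)
Ar_np = np.array((A*r).tolist(), dtype=float)

print(r_np)   # [[1.] [1.]] -- exact integers converted to floats, no rounding error
print(Ar_np)  # [[3.] [0.] [2.]]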

CodePudding user response:

This problem occurs because of floating-point imprecision: numpy does the arithmetic with binary floating-point numbers, so values that are exactly 0 or 1 on paper come out as nearby numbers such as 3.33066907e-16.
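
To see this concretely, here is a short sketch (reusing A, b and r from the question) that checks the result with a tolerance-aware comparison instead of exact equality:

import numpy as np
from numpy import linalg

A = np.array([[1,2],[1,-1],[1,1]])
b = np.array([[1],[-1],[5]])
r = linalg.inv(A.transpose()@A)@A.transpose()@b

# the entries of r differ from the exact answer by amounts on the order of 1e-16
print(r - 1)

# so comparisons should use a tolerance rather than ==
print(np.allclose(A@r, [[3],[0],[2]]))  # True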

If floating point is acceptable, a workaround (adapted from a related question) is to keep integer inputs and round the result back to integers:

import numpy as np
from numpy import linalg

A = np.array([[1,2],[1,-1],[1,1]], dtype=np.intc)
b = np.array([[1],[-1],[5]], dtype=np.intc)

# the inverse is still computed in floating point ...
r = linalg.inv(A.transpose()@A)@A.transpose()@b

# ... so round the results back to integers before printing or reusing them
print(r.round().astype(int))
print(A@r.round().astype(int))
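
Alternatively, still in floating point, the explicit inverse can be avoided altogether with np.linalg.lstsq, which solves the least-squares problem more stably; the same rounding (or np.allclose) applies at the end. A minimal sketch:

import numpy as np

A = np.array([[1,2],[1,-1],[1,1]], dtype=float)
b = np.array([[1],[-1],[5]], dtype=float)

# least-squares solution without forming inv(A.T @ A) explicitly
r, residuals, rank, singular_values = np.linalg.lstsq(A, b, rcond=None)

print(r.round().astype(int))      # the expected [[1], [1]]
print((A@r).round().astype(int))  # the expected [[3], [0], [2]]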