Slow Kalman Filter - How to speed up calculating inverse of 2x2 matrix (np.linalg.inv())?


I am currently working on an image processing project and, among other things, I am using a Kalman filter in the algorithm. However, the Kalman filter is very slow compared to the other software components, despite the use of numpy.

The predict function is very fast. The update function, however, is not. I think the reason could be the inversion of the 2x2 matrix with np.linalg.inv().

Does anyone have an idea for a faster calculation? Possibly a hardcoded formula, or a way to rearrange the equations so that the explicit inverse is avoided?
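
For a 2x2 matrix the inverse has a simple closed form (adjugate over determinant), so by "hardcoded" I mean something roughly like this (just a sketch to illustrate the idea, not yet integrated into the filter):

import numpy as np

def inv_2x2(m):
    # Closed-form inverse of a 2x2 matrix: swap the diagonal entries,
    # negate the off-diagonal entries, divide by the determinant.
    a, b = m[0, 0], m[0, 1]
    c, d = m[1, 0], m[1, 1]
    det = a * d - b * c
    return np.array([[d, -b],
                     [-c, a]]) / det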

I would also appreciate any other comments on how to make the code faster; I may have overlooked something.

Thank you very much in advance!

KalmanFilter.py:

import numpy as np

class KalmanFilter(object):
    def __init__(self, dt, u_x,u_y, std_acc, x_std_meas, y_std_meas):
        """
        :param dt: sampling time (time for 1 cycle)
        :param u_x: acceleration in x-direction
        :param u_y: acceleration in y-direction
        :param std_acc: process noise magnitude
        :param x_std_meas: standard deviation of the measurement in x-direction
        :param y_std_meas: standard deviation of the measurement in y-direction
        """

        # Define sampling time
        self.dt = dt

        # Define the control input variables
        self.u = np.array([u_x,u_y])

        # Initial State
        self.x = np.array([0.,0.,0.,0.,0.,0.])   # x, x', x'', y, y', y''

        # Define the State Transition Matrix A
        self.A = np.array([[1., self.dt, 0.5*self.dt**2., 0., 0., 0.],
                            [0., 1., self.dt, 0., 0., 0.],
                            [0., 0., 1., 0., 0., 0.],
                            [0., 0., 0., 1., self.dt, 0.5*self.dt**2.],
                            [0., 0., 0., 0., 1., self.dt],
                            [0., 0., 0., 0., 0., 1.]])

        # Define the Control Input Matrix B
        # (rows follow the state order x, x', x'', y, y', y'';
        # the acceleration states are driven by process noise only)
        self.B = np.array([[(self.dt**2.)/2., 0.],
                            [self.dt, 0.],
                            [0., 0.],
                            [0., (self.dt**2.)/2.],
                            [0., self.dt],
                            [0., 0.]])

        # Define Measurement Mapping Matrix
        self.H = np.array([[1., 0., 0., 0., 0., 0.],
                            [0., 0., 0., 1., 0., 0.]])

        # Initial Process Noise Covariance
        self.Q = np.array([[(self.dt**4.)/4., (self.dt**3.)/2., (self.dt**2.)/2., 0., 0., 0.],
                            [(self.dt**3.)/2., self.dt**2., self.dt, 0., 0., 0.],
                            [(self.dt**2.)/2., self.dt, 1., 0., 0., 0.],
                            [0., 0., 0., (self.dt**4.)/4., (self.dt**3.)/2., (self.dt**2.)/2.],
                            [0., 0., 0., (self.dt**3.)/2., self.dt**2., self.dt],
                            [0., 0., 0., (self.dt**2.)/2., self.dt, 1.]]) * std_acc**2.

        # Initial Measurement Noise Covariance
        self.R = np.array([[x_std_meas**2.,0.],
                           [0., y_std_meas**2.]])

        # Initial Covariance Matrix
        self.P = np.array([[500., 0., 0., 0., 0., 0.],  
                            [0., 500., 0., 0., 0., 0.],
                            [0., 0., 500., 0., 0., 0.],
                            [0., 0., 0., 500., 0., 0.],
                            [0., 0., 0., 0., 500., 0.],
                            [0., 0., 0., 0., 0., 500.]])

        # Initial Kalman Gain                
        self.K = np.zeros((6, 2))

        # Initial system uncertainty (innovation covariance)
        self.S = np.zeros((2, 2))

    def predict(self):

        # Update time state
        if self.A is not None and self.u[0] is not None:
            self.x = np.dot(self.A, self.x) + np.dot(self.B, self.u)
        else:
            self.x = np.dot(self.A, self.x)

        # Calculate error covariance
        self.P = np.dot(np.dot(self.A, self.P), self.A.T) + self.Q
        return self.x[0], self.x[3]

    def update(self, z):

        self.S = np.dot(self.H, np.dot(self.P, self.H.T)) + self.R

        # Calculate the Kalman Gain
        self.K = np.dot(np.dot(self.P, self.H.T), np.linalg.inv(self.S))

        self.x = self.x + np.dot(self.K, (z - np.dot(self.H, self.x)))

        I = np.eye(self.H.shape[1])

        # Update error covariance matrix
        helper = I - np.dot(self.K, self.H)
        self.P = np.dot(np.dot(helper, self.P), helper.T) + np.dot(self.K, np.dot(self.R, self.K.T))

        return self.x[0], self.x[3]
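
For reference, I drive the filter roughly like this (simplified; the parameter values and the number of iterations are just placeholders, not my real settings):

import numpy as np

# assumes the KalmanFilter class above is imported/defined
kf = KalmanFilter(dt=0.1, u_x=1.0, u_y=1.0, std_acc=1.0,
                  x_std_meas=0.1, y_std_meas=0.1)

for _ in range(10000):
    kf.predict()
    kf.update(np.array([1.0, 2.0]))   # dummy measurement (x, y)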

  • I already replaced all existing functions with their numpy equivalents, but that did not change much.
  • In my previous searches on the internet I found that np.linalg.inv() could be the reason why it is slow, but I can't find a way to get rid of it (see the timing sketch below).
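
To check how much the 2x2 inversion alone costs, a rough micro-benchmark looks like this (the matrix values are made up; absolute numbers will differ per machine):

import timeit

import numpy as np

S = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # stand-in for the 2x2 innovation covariance

# time 100000 calls to the LAPACK-backed inverse
print(timeit.timeit(lambda: np.linalg.inv(S), number=100000), "seconds")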

CodePudding user response:

You might get some speedup this way:

def __init__(self, dt, u_x,u_y, std_acc, x_std_meas, y_std_meas):
    
    # your existing code

    self.I = np.eye(self.H.shape[1])

And in update():

def update(self, z):
    # you can cut down on 1 dot product if you save P @ H.T in an intermediate variable
    P_HT = np.dot(self.P, self.H.T)

    self.S = np.dot(self.H, P_HT) + self.R

    # Calculate the Kalman Gain
    self.K = np.dot(P_HT, np.linalg.inv(self.S))

    self.x = self.x + np.dot(self.K, (z - np.dot(self.H, self.x)))

    # Update error covariance matrix >>> use self.I
    helper = self.I - np.dot(self.K, self.H)
    self.P = np.dot(np.dot(helper, self.P), helper.T) + np.dot(self.K, np.dot(self.R, self.K.T))

    return self.x[0], self.x[3]
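
If np.linalg.inv() itself remains the bottleneck, a further option (just a sketch on top of the answer above, not benchmarked) is to compute the gain without forming the inverse explicitly, e.g. with np.linalg.solve:

def update(self, z):
    P_HT = np.dot(self.P, self.H.T)

    self.S = np.dot(self.H, P_HT) + self.R

    # S is symmetric, so K = P H^T S^-1 can be obtained by solving
    # S K^T = (P H^T)^T for K^T and transposing, instead of calling inv()
    self.K = np.linalg.solve(self.S, P_HT.T).T

    self.x = self.x + np.dot(self.K, (z - np.dot(self.H, self.x)))

    helper = self.I - np.dot(self.K, self.H)
    self.P = np.dot(np.dot(helper, self.P), helper.T) + np.dot(self.K, np.dot(self.R, self.K.T))

    return self.x[0], self.x[3]

Whether this (or a hand-written 2x2 inverse like the one sketched in the question) is actually faster is worth timing: for a matrix this small, the fixed overhead of the LAPACK call often dominates the runtime.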