How to improve ArUco marker tracking on mp4 video footage

I am using ArUco markers to track two points in my research project's experimental setup. One marker is on the base of a robot, and the other is on the robot tip. The footage is recorded using OpenCV at 40 FPS, and the entire experiment lasts 72 seconds, so I expect around 2880 datapoints for each coordinate value (x, y, z; I do not need rotation). However, I lose points whenever the tip marker is in motion. Is there a way to slow down the playback of the video, or slow down OpenCV's processing of it, so that I could recover these missing points? I've included a gif of the robot; the trajectory is roughly arc-shaped. Thank you.

(gif: robot bending with two ArUco markers attached)

EDIT: Below I have added my function where I get the poses of my two ArUco markers.

import cv2 as cv
import numpy as np

kernel = np.ones((5, 5), np.uint8)

def get_aruco_pose(im):
    global rvec, tvec
    # Dilate the frame to thicken marker edges before detection
    im = cv.dilate(im, kernel, iterations=1)

    corners, ids, rejectedCandidates = cv.aruco.detectMarkers(im,
                                                              DICT,
                                                              parameters=PARAM)
    transf = None
    if len(corners) > 0:
        # flatten the ArUco IDs list
        ids = ids.flatten()
        # loop over the detected ArUco corners
        for (markerCorner, markerID) in zip(corners, ids):
            rvec, tvec, _ = cv.aruco.estimatePoseSingleMarkers(markerCorner, ARUCO_LENGTH_MM, CAMERA_MATRIX, DIST_COEFF)
            # Draw the marker overlay on the frame (comment out to hide markers in the video)
            show_aruco(im, markerCorner, markerID)
            # Build the 3x4 [R|t] transform for this marker
            rmat, _ = cv.Rodrigues(rvec)
            tvec    = tvec.reshape(3, 1)
            transf  = np.concatenate((rmat, tvec), axis=1)

            # Collect the translation vectors of the two tracked markers (IDs 3 and 4)
            if markerID == 3:
                tvec_bs.append(tvec)
            if markerID == 4:
                tvec_fs.append(tvec)
    return tvec

CodePudding user response:

Actually, this problem is not related to the software but to the hardware. Since your system moves with some speed, it causes motion blur in your images. Your code already checks each frame to detect the ArUco markers, but detection fails because of the motion blur.

Your camera has a parameter called exposure time: the time window over which each image is captured. The exposure time you need depends on how fast the target object moves. There is no fixed ratio that guarantees blur-free images; you have to choose a suitable camera based on experience.
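
For intuition, a minimal back-of-the-envelope sketch (the speed and exposure numbers below are made-up placeholders, not values from the question): the blur length in pixels is roughly the marker's apparent speed across the image multiplied by the exposure time.

# Rough motion-blur estimate: blur length (px) ~= apparent speed (px/s) * exposure time (s).
# All numbers are illustrative placeholders.
marker_speed_px_per_s = 400.0   # how fast the tip marker sweeps across the image
exposure_time_s = 1 / 100       # camera exposure time

blur_px = marker_speed_px_per_s * exposure_time_s
print(f"Expected blur length: {blur_px:.1f} px")   # ~4 px here; even a few px can degrade corner detection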

So here is what you can do to address the problem (a minimal capture-settings sketch follows this list):

  • You can switch to a camera with a higher FPS (and a correspondingly shorter exposure time)
  • You can decrease the speed of the platform to reduce the motion blur
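
If your camera and OpenCV backend expose these controls (many webcams do, but the accepted values are driver-specific), a minimal sketch of requesting a higher frame rate and a shorter, manual exposure at capture time; the camera index and the exposure value are placeholders:

import cv2 as cv

cap = cv.VideoCapture(0)                 # camera index is an assumption; adjust to your setup

# Ask for a higher frame rate; unsupported values may be silently ignored.
cap.set(cv.CAP_PROP_FPS, 60)

# Switch off auto exposure and request a shorter exposure.
# The flag values and exposure units are backend-dependent (V4L2, DirectShow, ...),
# so check what your driver actually accepts.
cap.set(cv.CAP_PROP_AUTO_EXPOSURE, 1)    # 1 = manual mode on many V4L2 drivers
cap.set(cv.CAP_PROP_EXPOSURE, -7)        # example value; meaning varies by backend

print("FPS reported:", cap.get(cv.CAP_PROP_FPS))
print("Exposure reported:", cap.get(cv.CAP_PROP_EXPOSURE))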

CodePudding user response:

(attached image)

The problem is evident in the attached image. Another approach would be to accept the problem and work around it instead of eliminating it: the blurred markers look visually different from frame to frame, but you know they represent the same marker undergoing motion and rotation, so you can still recognize and track them, as sketched below.
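
One way to act on that idea in software, assuming you cannot change the hardware: keep a generic tracker running on the marker's bounding box and fall back to it in the frames where ArUco detection fails, then interpolate the pose over those frames. A minimal sketch (requires opencv-contrib for the CSRT tracker; the dictionary, filename and single-marker handling are placeholders, and the tracker only gives a 2D box, not a pose):

import cv2 as cv

DICT = cv.aruco.getPredefinedDictionary(cv.aruco.DICT_4X4_50)   # placeholder dictionary
PARAM = cv.aruco.DetectorParameters_create()                    # legacy aruco API, as in the question

cap = cv.VideoCapture("experiment.mp4")                         # placeholder filename
tracker = None

while True:
    ok, frame = cap.read()
    if not ok:
        break

    corners, ids, _ = cv.aruco.detectMarkers(frame, DICT, parameters=PARAM)
    if ids is not None and len(corners) > 0:
        # Detection succeeded: (re)initialise the tracker on the first marker's bounding box.
        x, y, w, h = cv.boundingRect(corners[0].reshape(-1, 2).astype("float32"))
        tracker = cv.TrackerCSRT_create()
        tracker.init(frame, (x, y, w, h))
    elif tracker is not None:
        # Detection failed (e.g. motion blur): fall back to the tracker's 2D estimate.
        ok, (x, y, w, h) = tracker.update(frame)
        if ok:
            cv.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)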
