How to calibrate a camera and use it in real time?

I am trying to calibrate two cameras, each one individually. At this point, my script can calibrate both cameras successfully. Now I want to use those calibrated cameras in real time. The code I am using is the one from the OpenCV documentation. Below is the relevant part, since it is the part that is not working the way I want.

# module-level imports assumed: cv2 as cv, numpy as np, glob
def calibrateCamera(self, chessboardRows=9, chessboardCols=6, imshow=False):

    self.chessboardRows = chessboardRows
    self.chessboardCols = chessboardCols
    self.imshow = imshow
    chessboardSize = (self.chessboardRows, self.chessboardCols)
    criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    
    objp = np.zeros((self.chessboardCols*self.chessboardRows,3), np.float32)
    objp[:,:2] = np.mgrid[0:self.chessboardRows,0:self.chessboardCols].T.reshape(-1,2)
    
    objpoints = [] 
    imgpoints = [] 

    for path, index in zip(self.paths, self.indices):
        images = glob.glob(path + "*.png")

        for img in images:
            frame = cv.imread(img)
            gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
            ret, corners = cv.findChessboardCorners(gray, chessboardSize, None)

            if ret == True:
                objpoints.append(objp)
                corners2 = cv.cornerSubPix(gray,corners, (11,11), (-1,-1), criteria)
                imgpoints.append(corners2)
                cv.drawChessboardCorners(frame, chessboardSize, corners2, ret)
                if self.imshow == True:
                    cv.imshow(f"Calibrated images, Camera{index}", frame)
                    cv.waitKey(0)
            
            if ret == False:
                print("No pattern detected")
                break

        ret, mtx, dist, rvecs, tvecs = cv.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)

        print(f"Camera{index} matrix\n", mtx)
        print(f"Camera{index} distortion coefficients\n", dist)

        h,  w = frame.shape[:2]
        newcameramtx, roi = cv.getOptimalNewCameraMatrix(mtx, dist, (w,h), 1, (w,h))

        mapx, mapy = cv.initUndistortRectifyMap(mtx, dist, None, newcameramtx, (w,h), 5)
        dst = cv.remap(frame, mapx, mapy, cv.INTER_LINEAR)

        x, y, w, h = roi
        dst = dst[y:y+h, x:x+w]
        cv.imshow('calibresult.png', dst)
        
        k = cv.waitKey(0)

Can anyone help me use this "remap" in real time? And, lastly, is there any limitation in terms of frame rate when using this kind of method in real time? Thanks in advance.

CodePudding user response:

From calibration (OpenCV's calibrateCamera(), not your own function), you gain "intrinsics", i.e. camera matrix and distortion coefficients.

Store those intrinsics.
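
For example, they can be written to disk with NumPy and read back in the real-time script. A minimal sketch, assuming the calibration has already produced mtx and dist; the file name is only an illustration:

import numpy as np

# after cv.calibrateCamera() has returned mtx and dist
np.savez("camera0_intrinsics.npz", mtx=mtx, dist=dist)

# later, in the real-time script
data = np.load("camera0_intrinsics.npz")
mtx, dist = data["mtx"], data["dist"]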

Then call initUndistortRectifyMap() with those intrinsics. You receive lookup maps suitable for remap(). You do this once, not for every video frame.

Then you use remap() on video frames, using those maps.
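
Put together, the real-time loop could look roughly like this. It is only a sketch: the frame size, the camera index 0, and the variables mtx/dist (loaded as above) are assumptions for illustration, not part of the original code.

import cv2 as cv

# frame size delivered by the camera (set this to your actual resolution)
w, h = 1280, 720

# compute the rectified camera matrix and the lookup maps ONCE, before the loop
newcameramtx, roi = cv.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
mapx, mapy = cv.initUndistortRectifyMap(mtx, dist, None, newcameramtx, (w, h), cv.CV_32FC1)

cap = cv.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # per-frame work is only the remap (plus whatever processing you do)
    undistorted = cv.remap(frame, mapx, mapy, cv.INTER_LINEAR)
    cv.imshow("undistorted", undistorted)
    if cv.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv.destroyAllWindows()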

remap() on an entire image is fast enough for real-time processing, but it still has some cost.

If you can, do your processing on untouched camera images (those frames you have before you call remap()). Then undistort whatever point data you get from your processing. Undistorting points is also not cheap, but cheaper if done on a few points instead of an entire image.

CodePudding user response:

As mentioned by Christoph, you should use cv.initUndistortRectifyMap only once, outside your loop, to generate the map. Then, at each frame, you can use cv.remap.

Remapping (or undistorting) the entire image comes at a cost (especially for large images). Working on distorted images and only undistorting a few selected points might be a better option. The function to use for that is cv.undistortPoints, as sketched below.
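
A small sketch of that point-based approach, assuming mtx, dist and newcameramtx from the calibration above; the example coordinates are placeholders:

import numpy as np
import cv2 as cv

# a few feature locations detected on the distorted frame,
# shaped (N, 1, 2) as cv.undistortPoints expects
pts = np.array([[[320.0, 240.0]], [[100.5, 50.25]]], dtype=np.float32)

# passing P=newcameramtx returns pixel coordinates; without it the result
# is in normalized camera coordinates
undistorted_pts = cv.undistortPoints(pts, mtx, dist, P=newcameramtx)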

More information is available in the OpenCV documentation:

https://docs.opencv.org/4.6.0/d9/d0c/group__calib3d.html#ga55c716492470bfe86b0ee9bf3a1f0f7e
