Problem with type Point3f in function cv2.calibrateCamera PYTHON


Good morning everyone!

I am struggling with the OpenCV function calibrateCamera, which gives me the following error:

OpenCV(4.5.4) D:\a\opencv-python\opencv-python\opencv\modules\calib3d\src\calibration.cpp:3350: error: (-210:Unsupported format or combination of formats) objectPoints should contain vector of vectors of points of type Point3f in function 'cv::collectCalibrationData'

I tried to find a solution, but everything I could find concerned type or length issues. I use np.array with dtype float32 for both objpoints and imgpoints:

    objpoints = np.array(objpoints, dtype = np.float32)
    imgpoints = np.array(imgpoints, dtype = np.float32)
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, imsize, None, None )

The Variable Explorer gives the following values for the used variables:

    In: objpoints
    Out: 
    array([[  0. , 246.5,   0. ],
           [ 14.5, 246.5,   0. ],
           [ 43.5, 246.5,   0. ],
           ...,
           [174. ,   0. ,   0. ],
           [319. ,   0. ,   0. ],
           [333.5,   0. ,   0. ]], dtype=float32)
    
    In: imgpoints
    Out: 
    array([[1310., 1032.],
           [1258., 1032.],
           [1154., 1032.],
           [1206., 1032.],
           ...,
           [ 739.,  134.],
           [ 686.,  133.],
           [ 162.,  132.],
           [ 110.,  132.]], dtype=float32)
    
    In: imsize
    Out: (1080, 1440)
    
    In: len(imgpoints)
    Out: 440
    In: len(objpoints)
    Out: 440

I also tried swapping objpoints and imgpoints in the calibrateCamera call, but that resulted in the same error message. Since most of the solutions I found for this function are for C, I want to point out that this is a Python-related question.

I hope someone can help me with this! Thanks in advance!

CodePudding user response:

imagePoints and objectPoints should each contain a vector of vectors of points, i.e. image and object points for multiple images. In your case there are image and object points for only one image. You can work around this with np.newaxis, as follows. However, for a better calibration, try to use multiple images.

    objpoints = np.array(objpoints, dtype=np.float32)[np.newaxis]
    imgpoints = np.array(imgpoints, dtype=np.float32)[np.newaxis]
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, imsize, None, None)
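
To see what np.newaxis changes here, a quick check with placeholder arrays (stand-ins for the real data): it prepends an axis of length one, so OpenCV receives one view of N points rather than a flat array it cannot interpret as a vector of point vectors.

    import numpy as np

    objpoints = np.zeros((440, 3), np.float32)   # placeholder for the real object points
    imgpoints = np.zeros((440, 2), np.float32)   # placeholder for the real image points

    print(objpoints[np.newaxis].shape)           # (1, 440, 3): one view of 440 points
    print(imgpoints[np.newaxis].shape)           # (1, 440, 2)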

CodePudding user response:

This function can take multiple views (let's say N), each possibly showing a different object.

"vector of vector of points" means each "vector of points" is allowed to have a different length, because maybe that object is different from the objects seen in other views.

You can pass a single array with an outer dimension of N if all your views show the same object (so every per-view array has the same length), but in the general case you should pass a list of arrays, so that each array can have its own size.

    objApoints = np.array(objApoints, dtype=np.float32)
    img1points = np.array(img1points, dtype=np.float32)

    # have another picture? img2points...
    # it may show the same object (objApoints) or a different object (objBpoints)

    objpointslist = [objApoints]
    imgpointslist = [img1points]

    cv2.calibrateCamera(objpointslist, imgpointslist, ...)
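
To make the list-of-arrays pattern concrete, here is a minimal multi-view sketch. It assumes a standard 9x6 chessboard target and hypothetical image files named frame*.png; your own target and point-detection step would replace the findChessboardCorners part. Every successfully processed image appends one array to each list.

    import glob
    import numpy as np
    import cv2

    pattern = (9, 6)  # inner corners per chessboard row and column (assumed)

    # one reference object: planar grid at Z = 0, one 3-D point per corner
    objA = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objA[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    objpointslist, imgpointslist = [], []
    for fname in glob.glob("frame*.png"):          # hypothetical image files
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            objpointslist.append(objA)             # same object in every view here
            imgpointslist.append(corners.reshape(-1, 2).astype(np.float32))

    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
        objpointslist, imgpointslist, gray.shape[::-1], None, None)

Because these are plain Python lists, views with different numbers of points can coexist; calibrateCamera only requires that each object-point array matches its image-point array in length.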

CodePudding user response:

SOLVED: I had to put objpoints and imgpoints inside a list before making a NumPy array from them:

    objpoints = np.array([objpoints], dtype=np.float32)
    imgpoints = np.array([imgpoints], dtype=np.float32)

Instead of:

    objpoints = np.array(objpoints, dtype=np.float32)
    imgpoints = np.array(imgpoints, dtype=np.float32)
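
Note that this is equivalent to the np.newaxis suggestion above: both add a leading axis of length one (shape (1, 440, 3) for objpoints and (1, 440, 2) for imgpoints), so calibrateCamera sees one view containing all 440 point correspondences.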